00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2455 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3720 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.277 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.277 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.008 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.019 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.029 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.029 > git config core.sparsecheckout # timeout=10 00:00:07.040 > git read-tree -mu HEAD # timeout=10 00:00:07.054 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.075 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.075 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.153 [Pipeline] Start of Pipeline 00:00:07.165 [Pipeline] library 00:00:07.168 Loading library shm_lib@master 00:00:07.168 Library shm_lib@master is cached. Copying from home. 00:00:07.181 [Pipeline] node 00:00:07.204 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.206 [Pipeline] { 00:00:07.216 [Pipeline] catchError 00:00:07.218 [Pipeline] { 00:00:07.228 [Pipeline] wrap 00:00:07.235 [Pipeline] { 00:00:07.241 [Pipeline] stage 00:00:07.242 [Pipeline] { (Prologue) 00:00:07.438 [Pipeline] sh 00:00:08.305 + logger -p user.info -t JENKINS-CI 00:00:08.336 [Pipeline] echo 00:00:08.338 Node: WFP4 00:00:08.344 [Pipeline] sh 00:00:08.690 [Pipeline] setCustomBuildProperty 00:00:08.702 [Pipeline] echo 00:00:08.703 Cleanup processes 00:00:08.706 [Pipeline] sh 00:00:09.007 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.007 6057 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.021 [Pipeline] sh 00:00:09.310 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.310 ++ grep -v 'sudo pgrep' 00:00:09.310 ++ awk '{print $1}' 00:00:09.310 + sudo kill -9 00:00:09.310 + true 00:00:09.326 [Pipeline] cleanWs 00:00:09.336 [WS-CLEANUP] Deleting project workspace... 00:00:09.336 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.348 [WS-CLEANUP] done 00:00:09.352 [Pipeline] setCustomBuildProperty 00:00:09.367 [Pipeline] sh 00:00:09.656 + sudo git config --global --replace-all safe.directory '*' 00:00:09.751 [Pipeline] httpRequest 00:00:11.664 [Pipeline] echo 00:00:11.666 Sorcerer 10.211.164.20 is alive 00:00:11.675 [Pipeline] retry 00:00:11.677 [Pipeline] { 00:00:11.691 [Pipeline] httpRequest 00:00:11.695 HttpMethod: GET 00:00:11.696 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.697 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.718 Response Code: HTTP/1.1 200 OK 00:00:11.718 Success: Status code 200 is in the accepted range: 200,404 00:00:11.719 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.146 [Pipeline] } 00:00:30.164 [Pipeline] // retry 00:00:30.171 [Pipeline] sh 00:00:30.462 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.479 [Pipeline] httpRequest 00:00:30.979 [Pipeline] echo 00:00:30.981 Sorcerer 10.211.164.20 is alive 00:00:30.990 [Pipeline] retry 00:00:30.992 [Pipeline] { 00:00:31.006 [Pipeline] httpRequest 00:00:31.011 HttpMethod: GET 00:00:31.012 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:31.013 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:31.023 Response Code: HTTP/1.1 200 OK 00:00:31.023 Success: Status code 200 is in the accepted range: 200,404 00:00:31.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:50.704 [Pipeline] } 00:01:50.721 [Pipeline] // retry 00:01:50.729 [Pipeline] sh 00:01:51.023 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:53.579 [Pipeline] sh 00:01:53.866 + git -C spdk log --oneline -n5 00:01:53.866 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:53.866 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:53.866 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:53.866 66289a6db build: use VERSION file for storing version 00:01:53.866 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:53.883 [Pipeline] withCredentials 00:01:53.895 > git --version # timeout=10 00:01:53.907 > git --version # 'git version 2.39.2' 00:01:53.930 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:53.932 [Pipeline] { 00:01:53.941 [Pipeline] retry 00:01:53.942 [Pipeline] { 00:01:53.956 [Pipeline] sh 00:01:54.510 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:54.784 [Pipeline] } 00:01:54.802 [Pipeline] // retry 00:01:54.807 [Pipeline] } 00:01:54.823 [Pipeline] // withCredentials 00:01:54.833 [Pipeline] httpRequest 00:01:55.223 [Pipeline] echo 00:01:55.225 Sorcerer 10.211.164.20 is alive 00:01:55.234 [Pipeline] retry 00:01:55.236 [Pipeline] { 00:01:55.249 [Pipeline] httpRequest 00:01:55.254 HttpMethod: GET 00:01:55.254 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:55.255 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:55.259 Response Code: HTTP/1.1 200 OK 00:01:55.259 Success: Status code 200 is in the accepted range: 200,404 00:01:55.259 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:59.589 [Pipeline] } 00:01:59.607 [Pipeline] // retry 00:01:59.615 [Pipeline] sh 00:01:59.904 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:01.299 [Pipeline] sh 00:02:01.587 + git -C dpdk log --oneline -n5 00:02:01.587 caf0f5d395 version: 22.11.4 00:02:01.587 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:01.587 dc9c799c7d vhost: fix missing spinlock unlock 00:02:01.587 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:01.587 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:01.597 [Pipeline] } 00:02:01.610 [Pipeline] // stage 00:02:01.618 [Pipeline] stage 00:02:01.620 [Pipeline] { (Prepare) 00:02:01.639 [Pipeline] writeFile 00:02:01.654 [Pipeline] sh 00:02:01.944 + logger -p user.info -t JENKINS-CI 00:02:01.955 [Pipeline] sh 00:02:02.242 + logger -p user.info -t JENKINS-CI 00:02:02.254 [Pipeline] sh 00:02:02.540 + cat autorun-spdk.conf 00:02:02.540 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.540 SPDK_TEST_NVMF=1 00:02:02.540 SPDK_TEST_NVME_CLI=1 00:02:02.540 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.540 SPDK_TEST_NVMF_NICS=e810 00:02:02.540 SPDK_TEST_VFIOUSER=1 00:02:02.540 SPDK_RUN_UBSAN=1 00:02:02.540 NET_TYPE=phy 00:02:02.540 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:02.540 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.548 RUN_NIGHTLY=1 00:02:02.552 [Pipeline] readFile 00:02:02.584 [Pipeline] withEnv 00:02:02.586 [Pipeline] { 00:02:02.597 [Pipeline] sh 00:02:02.886 + set -ex 00:02:02.886 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:02.886 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:02.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.886 ++ SPDK_TEST_NVMF=1 00:02:02.886 ++ SPDK_TEST_NVME_CLI=1 00:02:02.886 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.886 ++ SPDK_TEST_NVMF_NICS=e810 00:02:02.886 ++ SPDK_TEST_VFIOUSER=1 00:02:02.886 ++ SPDK_RUN_UBSAN=1 00:02:02.886 ++ NET_TYPE=phy 00:02:02.886 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:02.886 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.886 ++ RUN_NIGHTLY=1 00:02:02.886 + case $SPDK_TEST_NVMF_NICS in 00:02:02.886 + DRIVERS=ice 00:02:02.886 + [[ tcp == \r\d\m\a ]] 00:02:02.886 + [[ -n ice ]] 00:02:02.886 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:02.886 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:02.886 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:02.886 rmmod: ERROR: Module i40iw is not currently loaded 00:02:02.886 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:02.886 + true 00:02:02.886 + for D in $DRIVERS 00:02:02.886 + sudo modprobe ice 00:02:02.886 + exit 0 00:02:02.896 [Pipeline] } 00:02:02.908 [Pipeline] // withEnv 00:02:02.913 [Pipeline] } 00:02:02.924 [Pipeline] // stage 00:02:02.932 [Pipeline] catchError 00:02:02.933 [Pipeline] { 00:02:02.946 [Pipeline] timeout 00:02:02.946 Timeout set to expire in 1 hr 0 min 00:02:02.947 [Pipeline] { 00:02:02.960 [Pipeline] stage 00:02:02.961 [Pipeline] { (Tests) 00:02:02.974 [Pipeline] sh 00:02:03.263 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.263 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.263 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.263 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:03.263 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.263 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:03.263 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:03.263 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:03.263 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:03.263 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:03.263 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:03.263 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.263 + source /etc/os-release 00:02:03.263 ++ NAME='Fedora Linux' 00:02:03.263 ++ VERSION='39 (Cloud Edition)' 00:02:03.263 ++ ID=fedora 00:02:03.263 ++ VERSION_ID=39 00:02:03.263 ++ VERSION_CODENAME= 00:02:03.263 ++ PLATFORM_ID=platform:f39 00:02:03.263 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.263 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.263 ++ LOGO=fedora-logo-icon 00:02:03.263 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.263 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.263 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.263 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.263 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.263 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.263 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.263 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.263 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.263 ++ SUPPORT_END=2024-11-12 00:02:03.263 ++ VARIANT='Cloud Edition' 00:02:03.263 ++ VARIANT_ID=cloud 00:02:03.263 + uname -a 00:02:03.263 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:02:03.263 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:05.804 Hugepages 00:02:05.804 node hugesize free / total 00:02:05.804 node0 1048576kB 0 / 0 00:02:05.804 node0 2048kB 0 / 0 00:02:05.804 node1 1048576kB 0 / 0 00:02:05.804 node1 2048kB 0 / 0 00:02:05.804 00:02:05.804 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.804 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:05.804 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:05.804 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:05.804 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:05.804 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:05.805 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:05.805 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:05.805 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:05.805 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:05.805 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:05.805 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:05.805 + rm -f /tmp/spdk-ld-path 00:02:05.805 + source autorun-spdk.conf 00:02:05.805 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.805 ++ SPDK_TEST_NVMF=1 00:02:05.805 ++ SPDK_TEST_NVME_CLI=1 00:02:05.805 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.805 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.805 ++ SPDK_TEST_VFIOUSER=1 00:02:05.805 ++ SPDK_RUN_UBSAN=1 00:02:05.805 ++ NET_TYPE=phy 00:02:05.805 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.805 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.805 ++ RUN_NIGHTLY=1 00:02:05.805 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.805 + [[ -n '' ]] 00:02:05.805 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.805 + for M in /var/spdk/build-*-manifest.txt 00:02:05.805 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.805 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.805 + for M in /var/spdk/build-*-manifest.txt 00:02:05.805 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.805 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.805 + for M in /var/spdk/build-*-manifest.txt 00:02:05.805 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.805 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.805 ++ uname 00:02:05.805 + [[ Linux == \L\i\n\u\x ]] 00:02:05.805 + sudo dmesg -T 00:02:05.805 + sudo dmesg --clear 00:02:05.805 + dmesg_pid=7546 00:02:05.805 + [[ Fedora Linux == FreeBSD ]] 00:02:05.805 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.805 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.805 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.805 + sudo dmesg -Tw 00:02:05.805 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.805 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.805 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.805 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.805 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.805 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.805 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.805 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.805 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.805 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.805 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.805 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.805 12:07:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:05.805 12:07:33 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.805 12:07:33 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:05.805 12:07:33 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:05.805 12:07:33 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:06.065 12:07:33 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:06.065 12:07:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:06.065 12:07:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:06.065 12:07:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.065 12:07:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.065 12:07:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.065 12:07:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.065 12:07:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.065 12:07:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.065 12:07:33 -- paths/export.sh@5 -- $ export PATH 00:02:06.065 12:07:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.065 12:07:33 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:06.065 12:07:33 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:06.065 12:07:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734088053.XXXXXX 00:02:06.065 12:07:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734088053.bG5r8p 00:02:06.065 12:07:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:06.065 12:07:33 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:06.065 12:07:33 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:06.065 12:07:33 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:06.065 12:07:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:06.065 12:07:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:06.065 12:07:33 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:06.065 12:07:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:06.065 12:07:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.065 12:07:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:06.065 12:07:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:06.065 12:07:33 -- pm/common@17 -- $ local monitor 00:02:06.065 12:07:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.065 12:07:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.065 12:07:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.065 12:07:33 -- pm/common@21 -- $ date +%s 00:02:06.065 12:07:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.065 12:07:33 -- pm/common@21 -- $ date +%s 00:02:06.065 12:07:33 -- pm/common@25 -- $ sleep 1 00:02:06.065 12:07:33 -- pm/common@21 -- $ date +%s 00:02:06.065 12:07:33 -- pm/common@21 -- $ date +%s 00:02:06.065 12:07:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734088053 00:02:06.065 12:07:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734088053 00:02:06.065 12:07:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734088053 00:02:06.065 12:07:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734088053 00:02:06.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734088053_collect-cpu-load.pm.log 00:02:06.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734088053_collect-vmstat.pm.log 00:02:06.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734088053_collect-cpu-temp.pm.log 00:02:06.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734088053_collect-bmc-pm.bmc.pm.log 00:02:07.005 12:07:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:07.005 12:07:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:07.005 12:07:34 
-- spdk/autobuild.sh@12 -- $ umask 022 00:02:07.005 12:07:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.005 12:07:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:07.005 Fri Dec 13 11:07:34 AM UTC 2024 00:02:07.005 12:07:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:07.005 v25.01-rc1-2-ge01cb43b8 00:02:07.005 12:07:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:07.005 12:07:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:07.005 12:07:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:07.005 12:07:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:07.005 12:07:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:07.005 12:07:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.005 ************************************ 00:02:07.005 START TEST ubsan 00:02:07.005 ************************************ 00:02:07.005 12:07:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:07.005 using ubsan 00:02:07.005 00:02:07.005 real 0m0.000s 00:02:07.005 user 0m0.000s 00:02:07.005 sys 0m0.000s 00:02:07.005 12:07:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:07.005 12:07:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.005 ************************************ 00:02:07.005 END TEST ubsan 00:02:07.005 ************************************ 00:02:07.266 12:07:34 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:07.266 12:07:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:07.266 12:07:34 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:07.266 12:07:34 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:07.266 12:07:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:07.266 12:07:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.266 ************************************ 00:02:07.266 START TEST build_native_dpdk 00:02:07.266 ************************************ 00:02:07.266 12:07:34 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:07.266 caf0f5d395 version: 22.11.4 00:02:07.266 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:07.266 dc9c799c7d vhost: fix missing spinlock unlock 00:02:07.266 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:07.266 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:07.266 12:07:34 build_native_dpdk -- 
scripts/common.sh@337 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:07.266 patching file config/rte_config.h 00:02:07.266 Hunk #1 succeeded at 60 (offset 1 line). 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:07.266 patching file lib/pcapng/rte_pcapng.c 00:02:07.266 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:07.266 12:07:34 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:07.266 12:07:34 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:13.850 The Meson build system 00:02:13.850 Version: 1.5.0 00:02:13.850 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:13.850 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:13.850 Build type: native build 00:02:13.850 Program cat found: YES (/usr/bin/cat) 00:02:13.850 Project name: DPDK 00:02:13.850 Project version: 22.11.4 00:02:13.850 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.850 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:13.850 Host machine cpu family: x86_64 00:02:13.850 Host machine cpu: x86_64 00:02:13.850 Message: ## Building in Developer Mode ## 00:02:13.850 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.850 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:13.850 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.850 Program objdump found: YES (/usr/bin/objdump) 00:02:13.850 Program python3 found: YES (/usr/bin/python3) 00:02:13.850 Program cat found: YES (/usr/bin/cat) 00:02:13.850 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
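The xtrace just above (scripts/common.sh, cmp_versions reached through the lt/ge wrappers) is how autobuild decides whether the pinned DPDK (22.11.4) needs the rte_config.h and rte_pcapng.c patches before configuring. A minimal standalone sketch of that comparison loop follows, assuming plain numeric dot/dash/colon-separated fields; the function and helper names come from the trace, but the body is a simplified reconstruction, not the actual scripts/common.sh source:

cmp_versions() {
    # Split "MAJ.MIN.PATCH"-style strings on '.', '-' or ':', as the trace does.
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        # The real script routes each field through a 'decimal' helper first;
        # here we assume numeric fields and force base 10 so "07" is not octal.
        ((10#$d1 > 10#$d2)) && { [[ $op == '>=' ]]; return; }
        ((10#$d1 < 10#$d2)) && { [[ $op == '<'  ]]; return; }
    done
    [[ $op == '>=' ]]   # all fields equal: only '>=' holds
}

cmp_versions 22.11.4 '<' 24.07.0 && echo 'patch lib/pcapng'   # succeeds, as in the log

This matches the three outcomes traced above: 22.11.4 < 21.11.0 returns 1 (skip), 22.11.4 < 24.07.0 returns 0 (apply the pcapng patch), and 22.11.4 >= 24.07.0 returns 1 (so dpdk_kmods stays false on this path).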
00:02:13.850 Checking for size of "void *" : 8 00:02:13.850 Checking for size of "void *" : 8 (cached) 00:02:13.850 Library m found: YES 00:02:13.850 Library numa found: YES 00:02:13.850 Has header "numaif.h" : YES 00:02:13.850 Library fdt found: NO 00:02:13.850 Library execinfo found: NO 00:02:13.850 Has header "execinfo.h" : YES 00:02:13.850 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.850 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.850 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.850 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.850 Run-time dependency openssl found: YES 3.1.1 00:02:13.850 Run-time dependency libpcap found: YES 1.10.4 00:02:13.850 Has header "pcap.h" with dependency libpcap: YES 00:02:13.850 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.850 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.850 Compiler for C supports arguments -Wformat: YES 00:02:13.850 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.850 Compiler for C supports arguments -Wformat-security: NO 00:02:13.850 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.850 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.850 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.850 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.850 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.850 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.850 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.850 Compiler for C supports arguments -Wundef: YES 00:02:13.850 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.850 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.850 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.850 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.850 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.850 Compiler for C supports arguments -mavx512f: YES 00:02:13.850 Checking if "AVX512 checking" compiles: YES 00:02:13.850 Fetching value of define "__SSE4_2__" : 1 00:02:13.850 Fetching value of define "__AES__" : 1 00:02:13.850 Fetching value of define "__AVX__" : 1 00:02:13.850 Fetching value of define "__AVX2__" : 1 00:02:13.850 Fetching value of define "__AVX512BW__" : 1 00:02:13.850 Fetching value of define "__AVX512CD__" : 1 00:02:13.850 Fetching value of define "__AVX512DQ__" : 1 00:02:13.850 Fetching value of define "__AVX512F__" : 1 00:02:13.850 Fetching value of define "__AVX512VL__" : 1 00:02:13.850 Fetching value of define "__PCLMUL__" : 1 00:02:13.850 Fetching value of define "__RDRND__" : 1 00:02:13.850 Fetching value of define "__RDSEED__" : 1 00:02:13.850 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.850 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.850 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.850 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.850 Checking for function "getentropy" : YES 00:02:13.850 Message: lib/eal: Defining dependency "eal" 00:02:13.850 Message: lib/ring: Defining dependency "ring" 00:02:13.850 Message: lib/rcu: Defining dependency "rcu" 00:02:13.850 Message: lib/mempool: Defining dependency "mempool" 00:02:13.850 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.850 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.850 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.850 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:13.850 Compiler for C supports arguments -mpclmul: YES 00:02:13.850 Compiler for C supports arguments -maes: YES 00:02:13.850 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.850 Compiler for C supports arguments -mavx512bw: YES 00:02:13.850 Compiler for C supports arguments -mavx512dq: YES 00:02:13.850 Compiler for C supports arguments -mavx512vl: YES 00:02:13.850 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.850 Compiler for C supports arguments -mavx2: YES 00:02:13.850 Compiler for C supports arguments -mavx: YES 00:02:13.850 Message: lib/net: Defining dependency "net" 00:02:13.850 Message: lib/meter: Defining dependency "meter" 00:02:13.850 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.850 Message: lib/pci: Defining dependency "pci" 00:02:13.850 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.850 Message: lib/metrics: Defining dependency "metrics" 00:02:13.850 Message: lib/hash: Defining dependency "hash" 00:02:13.850 Message: lib/timer: Defining dependency "timer" 00:02:13.850 Fetching value of define "__AVX2__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.850 Message: lib/acl: Defining dependency "acl" 00:02:13.850 Message: lib/bbdev: Defining dependency "bbdev" 00:02:13.850 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:13.850 Run-time dependency libelf found: YES 0.191 00:02:13.850 Message: lib/bpf: Defining dependency "bpf" 00:02:13.850 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:13.850 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.850 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.850 Message: lib/distributor: Defining dependency "distributor" 00:02:13.850 Message: lib/efd: Defining dependency "efd" 00:02:13.850 Message: lib/eventdev: Defining dependency "eventdev" 00:02:13.850 Message: lib/gpudev: Defining dependency "gpudev" 00:02:13.850 Message: lib/gro: Defining dependency "gro" 00:02:13.850 Message: lib/gso: Defining dependency "gso" 00:02:13.850 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:13.850 Message: lib/jobstats: Defining dependency "jobstats" 00:02:13.850 Message: lib/latencystats: Defining dependency "latencystats" 00:02:13.850 Message: lib/lpm: Defining dependency "lpm" 00:02:13.850 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:13.850 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:13.850 Message: lib/member: Defining dependency "member" 00:02:13.850 Message: lib/pcapng: Defining dependency "pcapng" 00:02:13.850 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.850 Message: lib/power: Defining dependency "power" 00:02:13.850 Message: lib/rawdev: Defining dependency "rawdev" 00:02:13.850 Message: lib/regexdev: Defining dependency "regexdev" 00:02:13.850 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:13.850 Message: lib/rib: Defining dependency "rib" 00:02:13.850 Message: lib/reorder: Defining dependency "reorder" 00:02:13.850 Message: lib/sched: Defining dependency "sched" 00:02:13.850 Message: lib/security: Defining dependency "security" 00:02:13.850 Message: lib/stack: Defining dependency "stack" 00:02:13.850 Has header "linux/userfaultfd.h" : YES 00:02:13.850 Message: lib/vhost: Defining dependency "vhost" 00:02:13.850 Message: lib/ipsec: Defining dependency "ipsec" 00:02:13.850 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.850 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.850 Message: lib/fib: Defining dependency "fib" 00:02:13.850 Message: lib/port: Defining dependency "port" 00:02:13.850 Message: lib/pdump: Defining dependency "pdump" 00:02:13.850 Message: lib/table: Defining dependency "table" 00:02:13.850 Message: lib/pipeline: Defining dependency "pipeline" 00:02:13.850 Message: lib/graph: Defining dependency "graph" 00:02:13.850 Message: lib/node: Defining dependency "node" 00:02:13.850 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.851 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.851 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.851 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.851 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:13.851 Compiler for C supports arguments -Wno-unused-value: YES 00:02:13.851 Compiler for C supports arguments -Wno-format: YES 00:02:13.851 Compiler for C supports arguments -Wno-format-security: YES 00:02:13.851 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:14.424 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:14.424 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:14.424 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:14.424 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.424 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.424 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.424 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.424 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.424 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:14.424 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:14.424 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.424 Configuring doxy-api.conf using configuration 00:02:14.424 Program sphinx-build found: NO 00:02:14.424 Configuring rte_build_config.h using configuration 00:02:14.424 Message: 00:02:14.424 ================= 00:02:14.424 Applications Enabled 00:02:14.424 ================= 00:02:14.424 00:02:14.424 apps: 00:02:14.424 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:14.424 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:14.424 test-security-perf, 00:02:14.424 00:02:14.424 Message: 00:02:14.424 ================= 00:02:14.424 Libraries Enabled 00:02:14.424 ================= 00:02:14.424 00:02:14.424 libs: 00:02:14.424 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:14.424 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:14.424 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:14.424 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:14.424 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:14.424 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:14.424 table, pipeline, graph, node, 00:02:14.424 00:02:14.424 Message: 00:02:14.424 =============== 00:02:14.424 Drivers Enabled 00:02:14.424 =============== 00:02:14.424 00:02:14.424 common: 00:02:14.424 00:02:14.424 bus: 00:02:14.424 pci, vdev, 00:02:14.424 mempool: 00:02:14.424 ring, 00:02:14.424 dma: 00:02:14.424 00:02:14.424 net: 00:02:14.424 i40e, 00:02:14.424 raw: 00:02:14.424 00:02:14.424 crypto: 00:02:14.424 00:02:14.424 compress: 00:02:14.424 00:02:14.424 regex: 00:02:14.424 00:02:14.424 vdpa: 00:02:14.424 00:02:14.424 event: 00:02:14.424 00:02:14.424 baseband: 00:02:14.424 00:02:14.424 gpu: 00:02:14.424 00:02:14.424 00:02:14.424 Message: 00:02:14.424 ================= 00:02:14.424 Content Skipped 00:02:14.424 ================= 00:02:14.424 00:02:14.424 apps: 00:02:14.424 00:02:14.424 libs: 00:02:14.424 kni: explicitly disabled via build config (deprecated lib) 00:02:14.424 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:14.424 00:02:14.424 drivers: 00:02:14.424 common/cpt: not in enabled drivers build config 00:02:14.424 common/dpaax: not in enabled drivers build config 00:02:14.424 common/iavf: not in enabled drivers build config 00:02:14.424 common/idpf: not in enabled drivers build config 00:02:14.424 common/mvep: not in enabled drivers build config 00:02:14.424 common/octeontx: not in enabled drivers build config 00:02:14.424 bus/auxiliary: not in enabled drivers build config 00:02:14.424 bus/dpaa: not in enabled drivers build config 00:02:14.424 bus/fslmc: not in enabled drivers build config 00:02:14.424 bus/ifpga: not in enabled drivers build config 00:02:14.424 bus/vmbus: not in enabled drivers build config 00:02:14.424 common/cnxk: not in enabled drivers build config 00:02:14.424 common/mlx5: not in enabled drivers build config 00:02:14.424 common/qat: not in enabled drivers build config 00:02:14.424 common/sfc_efx: not in enabled drivers build config 00:02:14.424 mempool/bucket: not in enabled drivers build config 00:02:14.424 mempool/cnxk: not in enabled drivers build config 00:02:14.424 mempool/dpaa: not in enabled drivers build config 00:02:14.424 mempool/dpaa2: not in enabled drivers build config 00:02:14.424 mempool/octeontx: not in enabled drivers build config 00:02:14.424 mempool/stack: not in enabled drivers build config 00:02:14.424 dma/cnxk: not in enabled drivers build config 00:02:14.424 dma/dpaa: not in enabled drivers build config 00:02:14.424 dma/dpaa2: not in enabled drivers build config 00:02:14.424 dma/hisilicon: not in enabled drivers build config 00:02:14.424 dma/idxd: not in enabled drivers build config 00:02:14.424 dma/ioat: not in enabled drivers build config 00:02:14.424 dma/skeleton: not in enabled drivers build config 00:02:14.424 net/af_packet: not in enabled drivers build config 00:02:14.424 net/af_xdp: not in enabled drivers build config 00:02:14.425 net/ark: not in enabled drivers build config 00:02:14.425 net/atlantic: not in enabled drivers build config 00:02:14.425 net/avp: not in enabled drivers build config 00:02:14.425 net/axgbe: not in enabled drivers build config 00:02:14.425 net/bnx2x: not in enabled drivers build config 00:02:14.425 net/bnxt: not in enabled drivers build config 00:02:14.425 net/bonding: not in enabled drivers build config 00:02:14.425 net/cnxk: not in enabled drivers build config 
00:02:14.425 net/cxgbe: not in enabled drivers build config 00:02:14.425 net/dpaa: not in enabled drivers build config 00:02:14.425 net/dpaa2: not in enabled drivers build config 00:02:14.425 net/e1000: not in enabled drivers build config 00:02:14.425 net/ena: not in enabled drivers build config 00:02:14.425 net/enetc: not in enabled drivers build config 00:02:14.425 net/enetfec: not in enabled drivers build config 00:02:14.425 net/enic: not in enabled drivers build config 00:02:14.425 net/failsafe: not in enabled drivers build config 00:02:14.425 net/fm10k: not in enabled drivers build config 00:02:14.425 net/gve: not in enabled drivers build config 00:02:14.425 net/hinic: not in enabled drivers build config 00:02:14.425 net/hns3: not in enabled drivers build config 00:02:14.425 net/iavf: not in enabled drivers build config 00:02:14.425 net/ice: not in enabled drivers build config 00:02:14.425 net/idpf: not in enabled drivers build config 00:02:14.425 net/igc: not in enabled drivers build config 00:02:14.425 net/ionic: not in enabled drivers build config 00:02:14.425 net/ipn3ke: not in enabled drivers build config 00:02:14.425 net/ixgbe: not in enabled drivers build config 00:02:14.425 net/kni: not in enabled drivers build config 00:02:14.425 net/liquidio: not in enabled drivers build config 00:02:14.425 net/mana: not in enabled drivers build config 00:02:14.425 net/memif: not in enabled drivers build config 00:02:14.425 net/mlx4: not in enabled drivers build config 00:02:14.425 net/mlx5: not in enabled drivers build config 00:02:14.425 net/mvneta: not in enabled drivers build config 00:02:14.425 net/mvpp2: not in enabled drivers build config 00:02:14.425 net/netvsc: not in enabled drivers build config 00:02:14.425 net/nfb: not in enabled drivers build config 00:02:14.425 net/nfp: not in enabled drivers build config 00:02:14.425 net/ngbe: not in enabled drivers build config 00:02:14.425 net/null: not in enabled drivers build config 00:02:14.425 net/octeontx: not in enabled drivers build config 00:02:14.425 net/octeon_ep: not in enabled drivers build config 00:02:14.425 net/pcap: not in enabled drivers build config 00:02:14.425 net/pfe: not in enabled drivers build config 00:02:14.425 net/qede: not in enabled drivers build config 00:02:14.425 net/ring: not in enabled drivers build config 00:02:14.425 net/sfc: not in enabled drivers build config 00:02:14.425 net/softnic: not in enabled drivers build config 00:02:14.425 net/tap: not in enabled drivers build config 00:02:14.425 net/thunderx: not in enabled drivers build config 00:02:14.425 net/txgbe: not in enabled drivers build config 00:02:14.425 net/vdev_netvsc: not in enabled drivers build config 00:02:14.425 net/vhost: not in enabled drivers build config 00:02:14.425 net/virtio: not in enabled drivers build config 00:02:14.425 net/vmxnet3: not in enabled drivers build config 00:02:14.425 raw/cnxk_bphy: not in enabled drivers build config 00:02:14.425 raw/cnxk_gpio: not in enabled drivers build config 00:02:14.425 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:14.425 raw/ifpga: not in enabled drivers build config 00:02:14.425 raw/ntb: not in enabled drivers build config 00:02:14.425 raw/skeleton: not in enabled drivers build config 00:02:14.425 crypto/armv8: not in enabled drivers build config 00:02:14.425 crypto/bcmfs: not in enabled drivers build config 00:02:14.425 crypto/caam_jr: not in enabled drivers build config 00:02:14.425 crypto/ccp: not in enabled drivers build config 00:02:14.425 crypto/cnxk: not in enabled drivers 
build config 00:02:14.425 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.425 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.425 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.425 crypto/mlx5: not in enabled drivers build config 00:02:14.425 crypto/mvsam: not in enabled drivers build config 00:02:14.425 crypto/nitrox: not in enabled drivers build config 00:02:14.425 crypto/null: not in enabled drivers build config 00:02:14.425 crypto/octeontx: not in enabled drivers build config 00:02:14.425 crypto/openssl: not in enabled drivers build config 00:02:14.425 crypto/scheduler: not in enabled drivers build config 00:02:14.425 crypto/uadk: not in enabled drivers build config 00:02:14.425 crypto/virtio: not in enabled drivers build config 00:02:14.425 compress/isal: not in enabled drivers build config 00:02:14.425 compress/mlx5: not in enabled drivers build config 00:02:14.425 compress/octeontx: not in enabled drivers build config 00:02:14.425 compress/zlib: not in enabled drivers build config 00:02:14.425 regex/mlx5: not in enabled drivers build config 00:02:14.425 regex/cn9k: not in enabled drivers build config 00:02:14.425 vdpa/ifc: not in enabled drivers build config 00:02:14.425 vdpa/mlx5: not in enabled drivers build config 00:02:14.425 vdpa/sfc: not in enabled drivers build config 00:02:14.425 event/cnxk: not in enabled drivers build config 00:02:14.425 event/dlb2: not in enabled drivers build config 00:02:14.425 event/dpaa: not in enabled drivers build config 00:02:14.425 event/dpaa2: not in enabled drivers build config 00:02:14.425 event/dsw: not in enabled drivers build config 00:02:14.425 event/opdl: not in enabled drivers build config 00:02:14.425 event/skeleton: not in enabled drivers build config 00:02:14.425 event/sw: not in enabled drivers build config 00:02:14.425 event/octeontx: not in enabled drivers build config 00:02:14.425 baseband/acc: not in enabled drivers build config 00:02:14.425 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:14.425 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:14.425 baseband/la12xx: not in enabled drivers build config 00:02:14.425 baseband/null: not in enabled drivers build config 00:02:14.425 baseband/turbo_sw: not in enabled drivers build config 00:02:14.425 gpu/cuda: not in enabled drivers build config 00:02:14.425 00:02:14.425 00:02:14.425 Build targets in project: 311 00:02:14.425 00:02:14.425 DPDK 22.11.4 00:02:14.425 00:02:14.425 User defined options 00:02:14.425 libdir : lib 00:02:14.425 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:14.425 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:14.425 c_link_args : 00:02:14.425 enable_docs : false 00:02:14.425 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:14.425 enable_kmods : false 00:02:14.425 machine : native 00:02:14.425 tests : false 00:02:14.425 00:02:14.425 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.425 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
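The WARNING above is meson itself flagging that the build was configured as `meson [options]` rather than the recommended `meson setup [options]`. As an illustrative reconstruction only (the autobuild wrapper's exact command line is not shown in this excerpt), the "User defined options" summary maps onto a configure step of roughly this shape:

    # sketch of the configure step behind the summary above; each flag
    # mirrors one line of "User defined options"
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false

`machine : native` tells the compiler to target the build host's own CPU rather than a generic baseline, which is consistent with the long list of drivers above staying disabled: only the handful named in enable_drivers is built.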
00:02:14.425 12:07:42 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:14.425 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:14.425 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:14.425 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:14.691 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:14.691 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:14.691 [5/740] Generating lib/rte_ring_mingw with a custom command 00:02:14.691 [6/740] Generating lib/rte_eal_mingw with a custom command 00:02:14.691 [7/740] Generating lib/rte_eal_def with a custom command 00:02:14.691 [8/740] Generating lib/rte_ring_def with a custom command 00:02:14.691 [9/740] Generating lib/rte_mbuf_def with a custom command 00:02:14.691 [10/740] Generating lib/rte_mempool_mingw with a custom command 00:02:14.691 [11/740] Generating lib/rte_rcu_def with a custom command 00:02:14.691 [12/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:14.691 [13/740] Generating lib/rte_rcu_mingw with a custom command 00:02:14.691 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.691 [15/740] Generating lib/rte_mempool_def with a custom command 00:02:14.691 [16/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.691 [17/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.691 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.691 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.691 [20/740] Generating lib/rte_net_mingw with a custom command 00:02:14.691 [21/740] Generating lib/rte_net_def with a custom command 00:02:14.691 [22/740] Generating lib/rte_meter_def with a custom command 00:02:14.691 [23/740] Generating lib/rte_meter_mingw with a custom command 00:02:14.691 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.691 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.691 [26/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:14.692 [27/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.692 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.692 [29/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:14.692 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.692 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.692 [32/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.692 [33/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.692 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.692 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.692 [36/740] Linking static target lib/librte_kvargs.a 00:02:14.692 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.692 [38/740] Generating lib/rte_ethdev_def with a custom command 00:02:14.692 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.692 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.692 [41/740] Generating 
lib/rte_pci_def with a custom command 00:02:14.692 [42/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:14.692 [43/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.692 [44/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.692 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.692 [46/740] Generating lib/rte_pci_mingw with a custom command 00:02:14.692 [47/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.692 [48/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.957 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.957 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.957 [51/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.957 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.957 [53/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.957 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.957 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:14.957 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.957 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.957 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.957 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.957 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.957 [61/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.957 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.957 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.957 [64/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.957 [65/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:14.957 [66/740] Generating lib/rte_cmdline_def with a custom command 00:02:14.957 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.957 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.957 [69/740] Generating lib/rte_metrics_def with a custom command 00:02:14.957 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.957 [71/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.957 [72/740] Generating lib/rte_metrics_mingw with a custom command 00:02:14.957 [73/740] Generating lib/rte_hash_def with a custom command 00:02:14.957 [74/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.957 [75/740] Generating lib/rte_hash_mingw with a custom command 00:02:14.957 [76/740] Generating lib/rte_timer_mingw with a custom command 00:02:14.957 [77/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:14.957 [78/740] Generating lib/rte_timer_def with a custom command 00:02:14.957 [79/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.957 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.957 [81/740] Generating lib/rte_acl_def with a custom command 00:02:14.957 [82/740] Generating lib/rte_acl_mingw with a custom command 00:02:14.957 [83/740] Linking static target 
lib/librte_pci.a 00:02:14.957 [84/740] Generating lib/rte_bbdev_def with a custom command 00:02:14.957 [85/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.957 [86/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:14.957 [87/740] Generating lib/rte_bitratestats_def with a custom command 00:02:14.957 [88/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.957 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.957 [90/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.957 [91/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.957 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.957 [93/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:14.957 [94/740] Linking static target lib/librte_meter.a 00:02:14.957 [95/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.957 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.957 [97/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.957 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.957 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.957 [100/740] Linking static target lib/librte_ring.a 00:02:14.957 [101/740] Generating lib/rte_bpf_def with a custom command 00:02:14.957 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.957 [103/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.957 [104/740] Generating lib/rte_bpf_mingw with a custom command 00:02:14.957 [105/740] Generating lib/rte_cfgfile_def with a custom command 00:02:14.957 [106/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:14.957 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.957 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.957 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.957 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.957 [111/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:14.957 [112/740] Generating lib/rte_compressdev_def with a custom command 00:02:14.957 [113/740] Generating lib/rte_cryptodev_def with a custom command 00:02:14.957 [114/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:14.957 [115/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.957 [116/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:14.957 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.957 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.222 [119/740] Generating lib/rte_distributor_mingw with a custom command 00:02:15.222 [120/740] Generating lib/rte_distributor_def with a custom command 00:02:15.222 [121/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.222 [122/740] Generating lib/rte_efd_def with a custom command 00:02:15.222 [123/740] Generating lib/rte_efd_mingw with a custom command 00:02:15.222 [124/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.222 [125/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.222 [126/740] Generating lib/rte_eventdev_def with a custom command 00:02:15.222 [127/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:15.222 [128/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.222 [129/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.222 [130/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.222 [131/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:15.222 [132/740] Generating lib/rte_gpudev_def with a custom command 00:02:15.222 [133/740] Linking target lib/librte_kvargs.so.23.0 00:02:15.222 [134/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.222 [135/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.222 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.222 [137/740] Generating lib/rte_gro_def with a custom command 00:02:15.222 [138/740] Generating lib/rte_gro_mingw with a custom command 00:02:15.222 [139/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.485 [140/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.485 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.485 [142/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.485 [143/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.485 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.485 [145/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.485 [146/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:15.485 [147/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.485 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.485 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.486 [150/740] Linking static target lib/librte_cfgfile.a 00:02:15.486 [151/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.486 [152/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.486 [153/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.486 [154/740] Generating lib/rte_gso_mingw with a custom command 00:02:15.486 [155/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.486 [156/740] Generating lib/rte_gso_def with a custom command 00:02:15.486 [157/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.486 [158/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.486 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.486 [160/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.486 [161/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.486 [162/740] Linking static target lib/librte_cmdline.a 00:02:15.486 [163/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.486 [164/740] Generating lib/rte_ip_frag_def with a custom command 00:02:15.486 [165/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:15.486 [166/740] Generating lib/rte_jobstats_def 
with a custom command 00:02:15.486 [167/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.486 [168/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:15.486 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.486 [170/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.486 [171/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:15.486 [172/740] Generating lib/rte_latencystats_def with a custom command 00:02:15.486 [173/740] Linking static target lib/librte_metrics.a 00:02:15.486 [174/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:15.486 [175/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:15.486 [176/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.486 [177/740] Generating lib/rte_lpm_def with a custom command 00:02:15.486 [178/740] Generating lib/rte_lpm_mingw with a custom command 00:02:15.486 [179/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.486 [180/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.486 [181/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:15.486 [182/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:15.486 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.486 [184/740] Linking static target lib/librte_timer.a 00:02:15.486 [185/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.486 [186/740] Generating lib/rte_member_def with a custom command 00:02:15.486 [187/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:15.486 [188/740] Generating lib/rte_member_mingw with a custom command 00:02:15.486 [189/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:15.760 [190/740] Generating lib/rte_pcapng_def with a custom command 00:02:15.760 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.760 [192/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:15.760 [193/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:15.760 [194/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.760 [195/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:15.760 [196/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.760 [197/740] Linking static target lib/librte_jobstats.a 00:02:15.760 [198/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.760 [199/740] Linking static target lib/librte_telemetry.a 00:02:15.760 [200/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:15.760 [201/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.760 [202/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.760 [203/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.760 [204/740] Generating lib/rte_power_def with a custom command 00:02:15.760 [205/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.760 [206/740] Generating lib/rte_power_mingw with a custom command 00:02:15.760 [207/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.760 [208/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.760 [209/740] Linking static target lib/librte_net.a 00:02:15.760 [210/740] Compiling C object 
lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:15.760 [211/740] Linking static target lib/librte_bitratestats.a 00:02:15.760 [212/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.760 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:15.760 [214/740] Generating lib/rte_rawdev_def with a custom command 00:02:15.760 [215/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.760 [216/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:15.760 [217/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:15.760 [218/740] Generating lib/rte_dmadev_def with a custom command 00:02:15.760 [219/740] Generating lib/rte_regexdev_def with a custom command 00:02:15.760 [220/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.760 [221/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:15.760 [222/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.760 [223/740] Generating lib/rte_rib_def with a custom command 00:02:15.760 [224/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.760 [225/740] Generating lib/rte_reorder_mingw with a custom command 00:02:15.760 [226/740] Generating lib/rte_rib_mingw with a custom command 00:02:15.760 [227/740] Generating lib/rte_reorder_def with a custom command 00:02:15.760 [228/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.760 [229/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:15.760 [230/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.760 [231/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:15.760 [232/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:15.760 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.760 [234/740] Generating lib/rte_sched_mingw with a custom command 00:02:15.760 [235/740] Generating lib/rte_sched_def with a custom command 00:02:15.760 [236/740] Generating lib/rte_security_def with a custom command 00:02:16.030 [237/740] Generating lib/rte_security_mingw with a custom command 00:02:16.030 [238/740] Generating lib/rte_stack_mingw with a custom command 00:02:16.030 [239/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:16.030 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:16.030 [241/740] Generating lib/rte_stack_def with a custom command 00:02:16.030 [242/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.030 [243/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:16.030 [244/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:16.030 [245/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:16.030 [246/740] Linking static target lib/librte_mempool.a 00:02:16.030 [247/740] Generating lib/rte_vhost_def with a custom command 00:02:16.030 [248/740] Generating lib/rte_vhost_mingw with a custom command 00:02:16.030 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:16.030 [250/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:16.030 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:16.030 [252/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.030 [253/740] 
Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:16.030 [254/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:16.030 [255/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:16.030 [256/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:16.030 [257/740] Linking static target lib/librte_stack.a 00:02:16.030 [258/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.030 [259/740] Linking static target lib/librte_compressdev.a 00:02:16.030 [260/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:16.030 [261/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:16.030 [262/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.030 [263/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:16.030 [264/740] Generating lib/rte_ipsec_def with a custom command 00:02:16.030 [265/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:16.030 [266/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.030 [267/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:16.030 [268/740] Generating lib/rte_fib_def with a custom command 00:02:16.030 [269/740] Generating lib/rte_fib_mingw with a custom command 00:02:16.030 [270/740] Linking static target lib/librte_bbdev.a 00:02:16.030 [271/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:16.295 [272/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:16.295 [273/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [274/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [275/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:16.295 [276/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [277/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.295 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:16.295 [279/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [280/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [281/740] Linking static target lib/librte_rcu.a 00:02:16.295 [282/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:16.295 [283/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:16.295 [284/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:16.295 [285/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:16.295 [286/740] Linking static target lib/librte_rawdev.a 00:02:16.295 [287/740] Generating lib/rte_port_def with a custom command 00:02:16.295 [288/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:16.295 [289/740] Generating lib/rte_port_mingw with a custom command 00:02:16.295 [290/740] Linking static target lib/librte_gpudev.a 00:02:16.295 [291/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:16.295 [292/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:16.295 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:16.295 [294/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:16.295 [295/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:16.295 [296/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:16.295 [297/740] Linking static target lib/librte_distributor.a 00:02:16.295 [298/740] Generating lib/rte_pdump_def with a custom command 00:02:16.295 [299/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.295 [300/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.295 [301/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.295 [302/740] Generating lib/rte_pdump_mingw with a custom command 00:02:16.295 [303/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:16.295 [304/740] Linking static target lib/librte_dmadev.a 00:02:16.295 [305/740] Linking static target lib/librte_gro.a 00:02:16.295 [306/740] Linking target lib/librte_telemetry.so.23.0 00:02:16.295 [307/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:16.295 [308/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:16.295 [309/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.558 [310/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:16.558 [311/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.558 [312/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:16.558 [313/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:16.558 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:16.558 [315/740] Linking static target lib/librte_gso.a 00:02:16.558 [316/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:16.558 [317/740] Linking static target lib/librte_eal.a 00:02:16.558 [318/740] Linking static target lib/librte_latencystats.a 00:02:16.558 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:16.558 [320/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:16.558 [321/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.558 [322/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.558 [323/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:16.559 [324/740] Generating lib/rte_table_def with a custom command 00:02:16.559 [325/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.559 [326/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:16.559 [327/740] Linking static target lib/librte_regexdev.a 00:02:16.559 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:16.559 [329/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:16.559 [330/740] Generating lib/rte_table_mingw with a custom command 00:02:16.559 [331/740] Linking static target lib/librte_ip_frag.a 00:02:16.830 [332/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:16.830 [333/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.830 [334/740] Linking static target lib/librte_power.a 00:02:16.830 [335/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:16.830 [336/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:16.830 [337/740] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:16.830 [338/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.831 [339/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:16.831 [340/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.831 [341/740] Linking static target lib/librte_reorder.a 00:02:16.831 [342/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.831 [343/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:16.831 [344/740] Generating lib/rte_pipeline_def with a custom command 00:02:16.831 [345/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:16.831 [346/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.831 [347/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:16.831 [348/740] Linking static target lib/librte_pcapng.a 00:02:16.831 [349/740] Generating lib/rte_graph_def with a custom command 00:02:16.831 [350/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:16.831 [351/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.831 [352/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.831 [353/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.831 [354/740] Linking static target lib/librte_security.a 00:02:16.831 [355/740] Generating lib/rte_graph_mingw with a custom command 00:02:16.831 [356/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.831 [357/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:16.831 [358/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.831 [359/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:17.098 [360/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:17.098 [361/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:17.098 [362/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:17.098 [363/740] Generating lib/rte_node_def with a custom command 00:02:17.098 [364/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.098 [365/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.098 [366/740] Linking static target lib/librte_mbuf.a 00:02:17.098 [367/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.098 [368/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:17.098 [369/740] Generating lib/rte_node_mingw with a custom command 00:02:17.098 [370/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:17.098 [371/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.098 [372/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:17.098 [373/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.098 [374/740] Linking static target lib/librte_bpf.a 00:02:17.098 [375/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:17.098 [376/740] Linking static target lib/librte_lpm.a 00:02:17.098 [377/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.098 [378/740] Generating drivers/rte_bus_pci_def with a custom 
command 00:02:17.098 [379/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:17.098 [380/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:17.098 [381/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.395 [382/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:17.395 [383/740] Linking static target lib/librte_rib.a 00:02:17.395 [384/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.395 [385/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:17.395 [386/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:17.395 [387/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.395 [388/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:17.395 [389/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:17.395 [390/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:17.395 [391/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.395 [392/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:17.395 [393/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:17.395 [394/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.395 [395/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:17.395 [396/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:17.395 [397/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:17.395 [398/740] Linking static target lib/librte_efd.a 00:02:17.395 [399/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:17.395 [400/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:17.395 [401/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:17.395 [402/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.395 [403/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:17.395 [404/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.395 [405/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:17.395 [406/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:17.395 [407/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:17.395 [408/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:17.395 [409/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:17.395 [410/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:17.395 [411/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:17.395 [412/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:17.395 [413/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:17.395 [414/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:17.395 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:17.672 [416/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:17.672 [417/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:17.672 [418/740] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:17.672 [419/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.672 [420/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:17.672 [421/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:17.672 [422/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:17.672 [423/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:17.673 [424/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:17.673 [425/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.673 [426/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:17.673 [427/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:17.673 [428/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.673 [429/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:17.673 [430/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:17.673 [431/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.673 [432/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:17.673 [433/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.673 [434/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:17.673 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:17.673 [436/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.673 [437/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:17.673 [438/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.673 [439/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:17.673 [440/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:17.936 [441/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [442/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:17.936 [443/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:17.936 [444/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [445/740] Linking static target lib/librte_graph.a 00:02:17.936 [446/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.936 [447/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:17.936 [448/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [449/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:17.936 [450/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:17.936 [451/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:17.936 [452/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:17.936 [453/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [454/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [455/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.936 [456/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 
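Interleaved with the compile steps, each `Generating lib/<name>.sym_chk with a custom command (wrapped by meson to capture output)` entry is DPDK's per-library symbol check: it compares the symbols a freshly linked library actually exports against that library's version map, so an accidentally public symbol fails the build instead of slipping into the ABI. A rough hand-run approximation of the listing half of that check (a sketch only, not the script's exact logic) would be:

    # list the global symbols the static kvargs library defines; the real
    # check diffs a listing like this against the library's version.map
    nm -g --defined-only build-tmp/lib/librte_kvargs.a | awk 'NF==3 {print $3}' | sort -u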
00:02:17.936 [457/740] Linking static target lib/librte_fib.a 00:02:18.208 [458/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:18.208 [459/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.208 [460/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.208 [461/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:18.208 [462/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:18.208 [463/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:18.208 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.208 [465/740] Linking static target lib/librte_pdump.a 00:02:18.208 [466/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:18.208 [467/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:18.208 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:18.476 [469/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.476 [470/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.476 [471/740] Linking static target drivers/librte_bus_vdev.a 00:02:18.476 [472/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:18.476 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:18.476 [474/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.476 [475/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.476 [476/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:18.476 [477/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:18.476 [478/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:18.476 [479/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:18.476 [480/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:18.476 [481/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:18.476 [482/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:18.476 [483/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:18.476 [484/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.476 [485/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:18.749 [486/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.749 [487/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:18.749 [488/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.749 [489/740] Linking static target drivers/librte_bus_pci.a 00:02:18.749 [490/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:18.749 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.749 [492/740] Linking static target lib/librte_table.a 00:02:18.749 [493/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:18.749 [494/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 
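The `Generating drivers/rte_bus_vdev.pmd.c with a custom command` step above (and its `rte_bus_pci` counterpart just below) is where each driver gains a small generated C file that embeds its PMD metadata strings into the binary. Those records can be read back out of a finished driver with the pmdinfo tool shipped in DPDK's usertools/ directory; assuming the build tree from this log, something like:

    # dump the embedded PMD_INFO records from a built driver .so
    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_bus_pci.so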
00:02:18.749 [495/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.749 [496/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:18.749 [497/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:18.749 [498/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.749 [499/740] Linking static target lib/librte_cryptodev.a 00:02:18.749 [500/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:18.749 [501/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:18.749 [502/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:18.749 [503/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:18.749 [504/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.749 [505/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:19.016 [506/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:19.016 [507/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.016 [508/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:19.016 [509/740] Linking static target lib/librte_ethdev.a 00:02:19.016 [510/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:19.016 [511/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:19.016 [512/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:19.016 [513/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:19.016 [514/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:19.016 [515/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:19.016 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:19.016 [517/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:19.016 [518/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:19.016 [519/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.016 [520/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:19.016 [521/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:19.016 [522/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.016 [523/740] Linking static target lib/librte_sched.a 00:02:19.016 [524/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:19.016 [525/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:19.016 [526/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:19.016 [527/740] Linking static target lib/librte_node.a 00:02:19.016 [528/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:19.016 [529/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:19.279 [530/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:19.279 [531/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:19.279 [532/740] Linking static target lib/librte_member.a 00:02:19.279 [533/740] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:19.279 [534/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:19.279 [535/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:19.279 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:19.279 [537/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:19.279 [538/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:19.279 [539/740] Linking static target lib/librte_ipsec.a 00:02:19.279 [540/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.279 [541/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.279 [542/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.279 [543/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:19.279 [544/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:19.279 [545/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:19.279 [546/740] Linking static target lib/librte_port.a 00:02:19.541 [547/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:19.541 [548/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.541 [549/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.541 [550/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:19.541 [551/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:19.541 [552/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.541 [553/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.541 [554/740] Linking static target drivers/librte_mempool_ring.a 00:02:19.541 [555/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:19.541 [556/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:19.541 [557/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:19.541 [558/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:19.541 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:19.541 [560/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:19.541 [561/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:19.541 [562/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:19.541 [563/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.541 [564/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.541 [565/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:19.541 [566/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.541 [567/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:19.800 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 
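The `app/dpdk-testpmd.p/` objects piling up here become the dpdk-testpmd binary that gets linked near the end of the log. As a usage note, the usual smoke test for a build like this is testpmd's interactive forwarding mode, run straight from the build tree (the lcore list and PCI address below are placeholders, and hugepages plus root privileges are normally required):

    # -l picks the lcores to run on, -a allowlists one NIC, and the
    # arguments after -- go to testpmd itself (-i = interactive prompt)
    ./build-tmp/app/dpdk-testpmd -l 0-1 -a 0000:00:00.0 -- -i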
00:02:19.800 [569/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:19.800 [570/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.800 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:19.800 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:19.800 [573/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:19.800 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:19.800 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:19.800 [576/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:19.800 [577/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:19.800 [578/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:19.800 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:19.800 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:19.800 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:19.800 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:19.800 [583/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:19.800 [584/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:19.800 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:19.800 [586/740] Linking static target lib/librte_eventdev.a 00:02:19.801 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:19.801 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:20.061 [589/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:20.061 [590/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:20.061 [591/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:20.061 [592/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:20.061 [593/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:20.061 [594/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:20.061 [595/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:20.061 [596/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:20.061 [597/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:20.061 [598/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:20.061 [599/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:20.061 [600/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:20.061 [601/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:20.061 [602/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:20.061 [603/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.061 [604/740] Linking static target lib/librte_hash.a 00:02:20.320 [605/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.320 [606/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:20.320 [607/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:20.320 [608/740] 
Linking static target lib/librte_acl.a 00:02:20.320 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:20.320 [610/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:20.579 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:20.579 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:20.579 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:20.839 [614/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.098 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:21.098 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:21.098 [617/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.358 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:21.929 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:21.929 [620/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.929 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:21.929 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:22.498 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:22.498 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:22.757 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.757 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:22.757 [627/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:22.757 [628/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:23.016 [629/740] Linking static target drivers/librte_net_i40e.a 00:02:23.275 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.843 [631/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:23.843 [632/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.843 [633/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:26.380 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.288 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.288 [636/740] Linking target lib/librte_eal.so.23.0 00:02:28.288 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:28.288 [638/740] Linking target lib/librte_meter.so.23.0 00:02:28.546 [639/740] Linking target lib/librte_ring.so.23.0 00:02:28.546 [640/740] Linking target lib/librte_jobstats.so.23.0 00:02:28.546 [641/740] Linking target lib/librte_pci.so.23.0 00:02:28.546 [642/740] Linking target lib/librte_timer.so.23.0 00:02:28.546 [643/740] Linking target lib/librte_cfgfile.so.23.0 00:02:28.546 [644/740] Linking target lib/librte_dmadev.so.23.0 00:02:28.546 [645/740] Linking target lib/librte_stack.so.23.0 00:02:28.546 [646/740] Linking target lib/librte_rawdev.so.23.0 00:02:28.546 [647/740] Linking target lib/librte_graph.so.23.0 00:02:28.546 [648/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:28.546 [649/740] Linking target lib/librte_acl.so.23.0 
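With the static archives largely done, ninja is now linking the shared `.so.23.0` targets and generating their symbol files. Once the tree is installed under the prefix recorded in the configure summary, an external program would normally pull these libraries in through the generated libdpdk pkg-config file; a minimal consumer sketch, assuming `hello.c` is a stand-in source file and that the install step has actually been run:

    # libdpdk.pc is produced by the DPDK meson build and installed under
    # <prefix>/<libdir>/pkgconfig
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello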
00:02:28.546 [650/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:28.546 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:28.546 [652/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:28.546 [653/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:28.546 [654/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:28.546 [655/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:28.546 [656/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:28.546 [657/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:28.546 [658/740] Linking target lib/librte_mempool.so.23.0 00:02:28.546 [659/740] Linking target lib/librte_rcu.so.23.0 00:02:28.546 [660/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:28.805 [661/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:28.805 [662/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:28.805 [663/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:28.805 [664/740] Linking target lib/librte_rib.so.23.0 00:02:28.805 [665/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:28.805 [666/740] Linking target lib/librte_mbuf.so.23.0 00:02:28.805 [667/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:28.805 [668/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:28.805 [669/740] Linking target lib/librte_fib.so.23.0 00:02:28.805 [670/740] Linking target lib/librte_bbdev.so.23.0 00:02:28.805 [671/740] Linking target lib/librte_regexdev.so.23.0 00:02:29.064 [672/740] Linking target lib/librte_compressdev.so.23.0 00:02:29.064 [673/740] Linking target lib/librte_net.so.23.0 00:02:29.064 [674/740] Linking target lib/librte_distributor.so.23.0 00:02:29.064 [675/740] Linking target lib/librte_sched.so.23.0 00:02:29.064 [676/740] Linking target lib/librte_reorder.so.23.0 00:02:29.064 [677/740] Linking target lib/librte_gpudev.so.23.0 00:02:29.064 [678/740] Linking target lib/librte_cryptodev.so.23.0 00:02:29.064 [679/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:29.064 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:29.064 [681/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:29.064 [682/740] Linking target lib/librte_hash.so.23.0 00:02:29.064 [683/740] Linking target lib/librte_cmdline.so.23.0 00:02:29.064 [684/740] Linking target lib/librte_security.so.23.0 00:02:29.064 [685/740] Linking target lib/librte_ethdev.so.23.0 00:02:29.323 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:29.323 [687/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:29.323 [688/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:29.323 [689/740] Linking target lib/librte_efd.so.23.0 00:02:29.323 [690/740] Linking target lib/librte_lpm.so.23.0 00:02:29.323 [691/740] Linking target lib/librte_member.so.23.0 00:02:29.323 [692/740] Linking target lib/librte_metrics.so.23.0 00:02:29.323 [693/740] Linking target 
lib/librte_pcapng.so.23.0 00:02:29.323 [694/740] Linking target lib/librte_gro.so.23.0 00:02:29.323 [695/740] Linking target lib/librte_ip_frag.so.23.0 00:02:29.323 [696/740] Linking target lib/librte_gso.so.23.0 00:02:29.323 [697/740] Linking target lib/librte_ipsec.so.23.0 00:02:29.323 [698/740] Linking target lib/librte_power.so.23.0 00:02:29.323 [699/740] Linking target lib/librte_eventdev.so.23.0 00:02:29.323 [700/740] Linking target lib/librte_bpf.so.23.0 00:02:29.323 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:29.323 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:29.323 [703/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:29.323 [704/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:29.323 [705/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:29.323 [706/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:29.323 [707/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:29.583 [708/740] Linking target lib/librte_node.so.23.0 00:02:29.583 [709/740] Linking target lib/librte_bitratestats.so.23.0 00:02:29.583 [710/740] Linking target lib/librte_latencystats.so.23.0 00:02:29.583 [711/740] Linking target lib/librte_pdump.so.23.0 00:02:29.583 [712/740] Linking target lib/librte_port.so.23.0 00:02:29.583 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:29.583 [714/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.583 [715/740] Linking target lib/librte_table.so.23.0 00:02:29.842 [716/740] Linking static target lib/librte_vhost.a 00:02:29.842 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:30.101 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:30.101 [719/740] Linking static target lib/librte_pipeline.a 00:02:30.669 [720/740] Linking target app/dpdk-dumpcap 00:02:30.669 [721/740] Linking target app/dpdk-proc-info 00:02:30.669 [722/740] Linking target app/dpdk-test-cmdline 00:02:30.669 [723/740] Linking target app/dpdk-test-compress-perf 00:02:30.669 [724/740] Linking target app/dpdk-test-bbdev 00:02:30.669 [725/740] Linking target app/dpdk-test-pipeline 00:02:30.669 [726/740] Linking target app/dpdk-test-flow-perf 00:02:30.669 [727/740] Linking target app/dpdk-test-eventdev 00:02:30.669 [728/740] Linking target app/dpdk-testpmd 00:02:30.669 [729/740] Linking target app/dpdk-test-sad 00:02:30.669 [730/740] Linking target app/dpdk-test-regex 00:02:30.669 [731/740] Linking target app/dpdk-test-fib 00:02:30.669 [732/740] Linking target app/dpdk-test-acl 00:02:30.669 [733/740] Linking target app/dpdk-test-crypto-perf 00:02:30.669 [734/740] Linking target app/dpdk-pdump 00:02:30.669 [735/740] Linking target app/dpdk-test-security-perf 00:02:30.669 [736/740] Linking target app/dpdk-test-gpudev 00:02:31.608 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.608 [738/740] Linking target lib/librte_vhost.so.23.0 00:02:34.908 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.908 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:34.908 12:08:02 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:34.908 12:08:02 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:34.908 12:08:02 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:34.908 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:34.908 [0/1] Installing files. 00:02:34.908 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.908 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:34.909 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
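(All of these "Installing ..." records belong to the "[0/1] Installing files" phase of the "ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install" step traced earlier: Meson copies the DPDK examples/ tree verbatim into build/share/dpdk/examples, one source file per record. A minimal way to reproduce just this step with the paths this job uses:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    ninja -C build-tmp -j96 install    # same effect as 'meson install -C build-tmp'
)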
00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:34.910 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:34.910 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.911 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:34.912 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:34.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:34.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:34.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:34.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:34.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:34.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.176 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.176 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.176 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:35.177 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 
Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_graph.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:35.177 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:35.177 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:35.177 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.177 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:35.177 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.179 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:35.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:35.443 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:35.443 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:35.443 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:35.443 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:35.443 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:35.443 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:35.443 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:35.443 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:35.443 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:35.443 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:35.444 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:35.444 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:35.444 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:35.444 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:35.444 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:35.444 
Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:35.444 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:35.444 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:35.444 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:35.444 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:35.444 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:35.444 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:35.444 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:35.444 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:35.444 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:35.444 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:35.444 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:35.444 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:35.444 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:35.444 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:35.444 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:35.444 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:35.444 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:35.444 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:35.444 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:35.444 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:35.444 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:35.444 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:35.444 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:35.444 Installing symlink pointing to librte_cfgfile.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:35.444 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:35.444 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:35.444 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:35.444 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:35.444 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:35.444 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:35.444 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:35.444 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:35.444 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:35.444 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:35.444 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:35.444 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:35.444 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:35.444 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:35.444 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:35.444 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:35.444 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:35.444 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:35.444 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:35.444 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:35.444 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:35.444 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:35.444 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:35.444 Installing symlink pointing to 
librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:35.444 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:35.444 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:35.444 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:35.444 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:35.444 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:35.444 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:35.444 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:35.444 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:35.444 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:35.444 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:35.444 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:35.444 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:35.444 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:35.444 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:35.444 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:35.444 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:35.444 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:35.444 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:35.444 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:35.444 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:35.444 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:35.444 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:35.444 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:35.444 Installing symlink pointing to librte_vhost.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:35.444 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:35.444 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:35.444 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:35.444 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:35.444 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:35.444 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:35.444 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:35.444 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:35.444 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:35.444 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:35.444 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:35.444 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:35.444 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:35.445 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:35.445 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:35.445 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:35.445 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:35.445 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:35.445 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:35.445 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:35.445 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:35.445 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:35.445 Installing symlink pointing to librte_net_i40e.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:35.445 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:35.445 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:35.445 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:35.445 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:35.445 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:35.445 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:35.445 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:35.445 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:35.445 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:35.445 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:35.445 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:35.445 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:35.445 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:35.445 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:35.445 12:08:02 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:35.445 12:08:02 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.445 00:02:35.445 real 0m28.215s 00:02:35.445 user 7m42.420s 00:02:35.445 sys 1m59.809s 00:02:35.445 12:08:02 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:35.445 12:08:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:35.445 ************************************ 00:02:35.445 END TEST build_native_dpdk 00:02:35.445 ************************************ 00:02:35.445 12:08:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:35.445 12:08:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:35.445 12:08:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:35.704 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:35.704 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:35.704 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:35.964 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:36.224 Using 'verbs' RDMA provider 00:02:49.382 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 
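The configure step above picks up the freshly staged DPDK through its pkg-config metadata (the libdpdk.pc and libdpdk-libs.pc files installed earlier into dpdk/build/lib/pkgconfig). A minimal sketch of querying that metadata by hand, assuming the same workspace layout as in this log; the PKG_CONFIG_PATH export and both queries are illustrative and do not appear in the log itself:

    # Point pkg-config at the staged (not system-wide) DPDK install
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    # Report the staged DPDK version
    pkg-config --modversion libdpdk
    # Emit the -I/-L/-l flags a consumer such as SPDK's configure resolves
    pkg-config --cflags --libs libdpdk

The "DPDK libraries" and "DPDK includes" lines printed right after the configure invocation above refer to the same staged build directory passed via --with-dpdk.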
00:03:01.598 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:01.598 Creating mk/config.mk...done. 00:03:01.598 Creating mk/cc.flags.mk...done. 00:03:01.598 Type 'make' to build. 00:03:01.598 12:08:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:01.598 12:08:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:01.598 12:08:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:01.598 12:08:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.598 ************************************ 00:03:01.598 START TEST make 00:03:01.598 ************************************ 00:03:01.598 12:08:29 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:03.542 The Meson build system 00:03:03.542 Version: 1.5.0 00:03:03.542 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:03.542 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.542 Build type: native build 00:03:03.542 Project name: libvfio-user 00:03:03.542 Project version: 0.0.1 00:03:03.542 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:03.542 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:03.542 Host machine cpu family: x86_64 00:03:03.542 Host machine cpu: x86_64 00:03:03.542 Run-time dependency threads found: YES 00:03:03.543 Library dl found: YES 00:03:03.543 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:03.543 Run-time dependency json-c found: YES 0.17 00:03:03.543 Run-time dependency cmocka found: YES 1.1.7 00:03:03.543 Program pytest-3 found: NO 00:03:03.543 Program flake8 found: NO 00:03:03.543 Program misspell-fixer found: NO 00:03:03.543 Program restructuredtext-lint found: NO 00:03:03.543 Program valgrind found: YES (/usr/bin/valgrind) 00:03:03.543 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.543 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.543 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.543 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:03.543 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:03.543 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:03.543 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:03.543 Build targets in project: 8 00:03:03.543 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:03.543 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:03.543 00:03:03.543 libvfio-user 0.0.1 00:03:03.543 00:03:03.543 User defined options 00:03:03.543 buildtype : debug 00:03:03.543 default_library: shared 00:03:03.543 libdir : /usr/local/lib 00:03:03.543 00:03:03.543 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.110 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:04.368 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:04.368 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:04.368 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:04.368 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:04.368 [5/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:04.368 [6/37] Compiling C object samples/null.p/null.c.o 00:03:04.368 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:04.368 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:04.368 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:04.368 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:04.368 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:04.368 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:04.368 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:04.368 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:04.368 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:04.368 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:04.368 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:04.368 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:04.368 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:04.368 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:04.368 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:04.368 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:04.368 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:04.368 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:04.368 [25/37] Compiling C object samples/server.p/server.c.o 00:03:04.368 [26/37] Compiling C object samples/client.p/client.c.o 00:03:04.368 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:04.368 [28/37] Linking target samples/client 00:03:04.368 [29/37] Linking target test/unit_tests 00:03:04.368 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:04.627 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:04.627 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:04.627 [33/37] Linking target samples/null 00:03:04.627 [34/37] Linking target samples/server 00:03:04.627 [35/37] Linking target samples/lspci 00:03:04.627 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:04.627 [37/37] Linking target samples/gpio-pci-idio-16 00:03:04.627 INFO: autodetecting backend as ninja 00:03:04.627 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
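The libvfio-user block above is a stock Meson flow: configure a debug build tree with shared libraries, have ninja compile the 37 targets, then stage the result under a DESTDIR (the next log line shows the install command the harness actually ran). A minimal sketch of the equivalent sequence, assuming the same source and build directories; the option spellings are inferred from the "User defined options" summary rather than copied from the log:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    # Configure: buildtype debug, shared default library, libdir as reported above
    meson setup --buildtype=debug --default-library=shared --libdir=/usr/local/lib "$BUILD" "$SRC"
    # Compile the 37 targets enumerated in the [N/37] lines
    ninja -C "$BUILD"
    # Stage the install under a scratch root instead of writing to /usr/local
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"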
00:03:04.886 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:05.144 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:05.144 ninja: no work to do. 00:03:37.229 CC lib/ut_mock/mock.o 00:03:37.229 CC lib/log/log.o 00:03:37.229 CC lib/log/log_flags.o 00:03:37.229 CC lib/ut/ut.o 00:03:37.229 CC lib/log/log_deprecated.o 00:03:37.229 LIB libspdk_log.a 00:03:37.229 LIB libspdk_ut_mock.a 00:03:37.229 LIB libspdk_ut.a 00:03:37.229 SO libspdk_ut_mock.so.6.0 00:03:37.229 SO libspdk_log.so.7.1 00:03:37.229 SO libspdk_ut.so.2.0 00:03:37.229 SYMLINK libspdk_ut_mock.so 00:03:37.229 SYMLINK libspdk_log.so 00:03:37.229 SYMLINK libspdk_ut.so 00:03:37.229 CC lib/util/base64.o 00:03:37.229 CXX lib/trace_parser/trace.o 00:03:37.229 CC lib/util/bit_array.o 00:03:37.229 CC lib/util/cpuset.o 00:03:37.229 CC lib/util/crc16.o 00:03:37.229 CC lib/util/crc32.o 00:03:37.229 CC lib/util/crc32c.o 00:03:37.229 CC lib/util/crc32_ieee.o 00:03:37.229 CC lib/util/crc64.o 00:03:37.229 CC lib/util/dif.o 00:03:37.229 CC lib/ioat/ioat.o 00:03:37.229 CC lib/util/fd.o 00:03:37.229 CC lib/util/fd_group.o 00:03:37.229 CC lib/util/file.o 00:03:37.229 CC lib/util/hexlify.o 00:03:37.229 CC lib/util/iov.o 00:03:37.229 CC lib/dma/dma.o 00:03:37.229 CC lib/util/math.o 00:03:37.229 CC lib/util/net.o 00:03:37.229 CC lib/util/pipe.o 00:03:37.229 CC lib/util/strerror_tls.o 00:03:37.229 CC lib/util/string.o 00:03:37.229 CC lib/util/xor.o 00:03:37.229 CC lib/util/uuid.o 00:03:37.229 CC lib/util/zipf.o 00:03:37.229 CC lib/util/md5.o 00:03:37.229 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.229 CC lib/vfio_user/host/vfio_user.o 00:03:37.229 LIB libspdk_dma.a 00:03:37.229 SO libspdk_dma.so.5.0 00:03:37.229 LIB libspdk_ioat.a 00:03:37.229 SO libspdk_ioat.so.7.0 00:03:37.229 SYMLINK libspdk_dma.so 00:03:37.229 SYMLINK libspdk_ioat.so 00:03:37.229 LIB libspdk_vfio_user.a 00:03:37.229 SO libspdk_vfio_user.so.5.0 00:03:37.229 LIB libspdk_util.a 00:03:37.229 SYMLINK libspdk_vfio_user.so 00:03:37.229 SO libspdk_util.so.10.1 00:03:37.229 SYMLINK libspdk_util.so 00:03:37.229 CC lib/rdma_utils/rdma_utils.o 00:03:37.229 CC lib/json/json_parse.o 00:03:37.229 CC lib/json/json_util.o 00:03:37.229 CC lib/json/json_write.o 00:03:37.229 CC lib/idxd/idxd.o 00:03:37.229 CC lib/idxd/idxd_user.o 00:03:37.229 CC lib/env_dpdk/env.o 00:03:37.229 CC lib/idxd/idxd_kernel.o 00:03:37.229 CC lib/env_dpdk/memory.o 00:03:37.229 CC lib/env_dpdk/pci.o 00:03:37.229 CC lib/conf/conf.o 00:03:37.229 CC lib/env_dpdk/init.o 00:03:37.229 CC lib/vmd/vmd.o 00:03:37.229 CC lib/env_dpdk/threads.o 00:03:37.229 CC lib/vmd/led.o 00:03:37.229 CC lib/env_dpdk/pci_ioat.o 00:03:37.229 CC lib/env_dpdk/pci_virtio.o 00:03:37.229 CC lib/env_dpdk/pci_vmd.o 00:03:37.229 CC lib/env_dpdk/pci_idxd.o 00:03:37.229 CC lib/env_dpdk/pci_event.o 00:03:37.229 CC lib/env_dpdk/sigbus_handler.o 00:03:37.229 CC lib/env_dpdk/pci_dpdk.o 00:03:37.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.229 LIB libspdk_conf.a 00:03:37.229 LIB libspdk_rdma_utils.a 00:03:37.229 SO libspdk_conf.so.6.0 00:03:37.229 LIB libspdk_json.a 00:03:37.229 SO libspdk_rdma_utils.so.1.0 00:03:37.229 SO libspdk_json.so.6.0 00:03:37.229 SYMLINK libspdk_conf.so 00:03:37.229 SYMLINK libspdk_rdma_utils.so 00:03:37.229 SYMLINK libspdk_json.so 00:03:37.229 LIB libspdk_idxd.a 00:03:37.229 SO 
libspdk_idxd.so.12.1 00:03:37.229 LIB libspdk_vmd.a 00:03:37.229 SYMLINK libspdk_idxd.so 00:03:37.229 SO libspdk_vmd.so.6.0 00:03:37.229 SYMLINK libspdk_vmd.so 00:03:37.229 LIB libspdk_trace_parser.a 00:03:37.229 CC lib/rdma_provider/common.o 00:03:37.229 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:37.229 SO libspdk_trace_parser.so.6.0 00:03:37.229 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.229 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.229 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.229 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:37.229 SYMLINK libspdk_trace_parser.so 00:03:37.229 LIB libspdk_rdma_provider.a 00:03:37.229 SO libspdk_rdma_provider.so.7.0 00:03:37.229 LIB libspdk_jsonrpc.a 00:03:37.229 SYMLINK libspdk_rdma_provider.so 00:03:37.229 SO libspdk_jsonrpc.so.6.0 00:03:37.229 SYMLINK libspdk_jsonrpc.so 00:03:37.229 LIB libspdk_env_dpdk.a 00:03:37.229 SO libspdk_env_dpdk.so.15.1 00:03:37.229 SYMLINK libspdk_env_dpdk.so 00:03:37.229 CC lib/rpc/rpc.o 00:03:37.229 LIB libspdk_rpc.a 00:03:37.229 SO libspdk_rpc.so.6.0 00:03:37.229 SYMLINK libspdk_rpc.so 00:03:37.229 CC lib/notify/notify.o 00:03:37.229 CC lib/notify/notify_rpc.o 00:03:37.229 CC lib/keyring/keyring.o 00:03:37.229 CC lib/keyring/keyring_rpc.o 00:03:37.229 CC lib/trace/trace.o 00:03:37.229 CC lib/trace/trace_flags.o 00:03:37.229 CC lib/trace/trace_rpc.o 00:03:37.229 LIB libspdk_notify.a 00:03:37.229 SO libspdk_notify.so.6.0 00:03:37.229 LIB libspdk_keyring.a 00:03:37.229 LIB libspdk_trace.a 00:03:37.229 SO libspdk_keyring.so.2.0 00:03:37.229 SYMLINK libspdk_notify.so 00:03:37.229 SO libspdk_trace.so.11.0 00:03:37.229 SYMLINK libspdk_keyring.so 00:03:37.229 SYMLINK libspdk_trace.so 00:03:37.229 CC lib/sock/sock.o 00:03:37.229 CC lib/sock/sock_rpc.o 00:03:37.229 CC lib/thread/thread.o 00:03:37.229 CC lib/thread/iobuf.o 00:03:37.229 LIB libspdk_sock.a 00:03:37.229 SO libspdk_sock.so.10.0 00:03:37.229 SYMLINK libspdk_sock.so 00:03:37.229 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.229 CC lib/nvme/nvme_ctrlr.o 00:03:37.229 CC lib/nvme/nvme_fabric.o 00:03:37.229 CC lib/nvme/nvme_ns_cmd.o 00:03:37.229 CC lib/nvme/nvme_ns.o 00:03:37.229 CC lib/nvme/nvme_pcie_common.o 00:03:37.229 CC lib/nvme/nvme_pcie.o 00:03:37.229 CC lib/nvme/nvme_qpair.o 00:03:37.230 CC lib/nvme/nvme.o 00:03:37.230 CC lib/nvme/nvme_quirks.o 00:03:37.230 CC lib/nvme/nvme_transport.o 00:03:37.230 CC lib/nvme/nvme_discovery.o 00:03:37.230 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.230 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.230 CC lib/nvme/nvme_tcp.o 00:03:37.230 CC lib/nvme/nvme_opal.o 00:03:37.230 CC lib/nvme/nvme_io_msg.o 00:03:37.230 CC lib/nvme/nvme_poll_group.o 00:03:37.230 CC lib/nvme/nvme_zns.o 00:03:37.230 CC lib/nvme/nvme_stubs.o 00:03:37.230 CC lib/nvme/nvme_auth.o 00:03:37.230 CC lib/nvme/nvme_cuse.o 00:03:37.230 CC lib/nvme/nvme_vfio_user.o 00:03:37.230 CC lib/nvme/nvme_rdma.o 00:03:37.230 LIB libspdk_thread.a 00:03:37.230 SO libspdk_thread.so.11.0 00:03:37.489 SYMLINK libspdk_thread.so 00:03:37.748 CC lib/vfu_tgt/tgt_endpoint.o 00:03:37.748 CC lib/vfu_tgt/tgt_rpc.o 00:03:37.748 CC lib/blob/blobstore.o 00:03:37.748 CC lib/blob/request.o 00:03:37.748 CC lib/blob/zeroes.o 00:03:37.748 CC lib/blob/blob_bs_dev.o 00:03:37.748 CC lib/accel/accel.o 00:03:37.748 CC lib/accel/accel_rpc.o 00:03:37.748 CC lib/accel/accel_sw.o 00:03:37.748 CC lib/virtio/virtio.o 00:03:37.748 CC lib/fsdev/fsdev.o 00:03:37.748 CC lib/fsdev/fsdev_io.o 00:03:37.748 CC lib/virtio/virtio_vhost_user.o 00:03:37.748 CC lib/virtio/virtio_vfio_user.o 00:03:37.748 CC lib/fsdev/fsdev_rpc.o 
00:03:37.748 CC lib/virtio/virtio_pci.o 00:03:37.748 CC lib/init/json_config.o 00:03:37.748 CC lib/init/subsystem.o 00:03:37.748 CC lib/init/subsystem_rpc.o 00:03:37.748 CC lib/init/rpc.o 00:03:38.009 LIB libspdk_init.a 00:03:38.009 SO libspdk_init.so.6.0 00:03:38.009 LIB libspdk_vfu_tgt.a 00:03:38.009 SO libspdk_vfu_tgt.so.3.0 00:03:38.009 LIB libspdk_virtio.a 00:03:38.009 SYMLINK libspdk_init.so 00:03:38.009 SO libspdk_virtio.so.7.0 00:03:38.268 SYMLINK libspdk_vfu_tgt.so 00:03:38.268 SYMLINK libspdk_virtio.so 00:03:38.269 LIB libspdk_fsdev.a 00:03:38.269 SO libspdk_fsdev.so.2.0 00:03:38.529 SYMLINK libspdk_fsdev.so 00:03:38.529 CC lib/event/app.o 00:03:38.529 CC lib/event/reactor.o 00:03:38.529 CC lib/event/log_rpc.o 00:03:38.529 CC lib/event/app_rpc.o 00:03:38.529 CC lib/event/scheduler_static.o 00:03:38.529 LIB libspdk_accel.a 00:03:38.529 SO libspdk_accel.so.16.0 00:03:38.788 SYMLINK libspdk_accel.so 00:03:38.788 LIB libspdk_nvme.a 00:03:38.788 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:38.788 LIB libspdk_event.a 00:03:38.788 SO libspdk_event.so.14.0 00:03:38.788 SO libspdk_nvme.so.15.0 00:03:38.788 SYMLINK libspdk_event.so 00:03:39.048 SYMLINK libspdk_nvme.so 00:03:39.048 CC lib/bdev/bdev.o 00:03:39.048 CC lib/bdev/bdev_rpc.o 00:03:39.048 CC lib/bdev/bdev_zone.o 00:03:39.048 CC lib/bdev/part.o 00:03:39.048 CC lib/bdev/scsi_nvme.o 00:03:39.308 LIB libspdk_fuse_dispatcher.a 00:03:39.308 SO libspdk_fuse_dispatcher.so.1.0 00:03:39.308 SYMLINK libspdk_fuse_dispatcher.so 00:03:39.880 LIB libspdk_blob.a 00:03:39.880 SO libspdk_blob.so.12.0 00:03:40.140 SYMLINK libspdk_blob.so 00:03:40.400 CC lib/blobfs/blobfs.o 00:03:40.400 CC lib/blobfs/tree.o 00:03:40.400 CC lib/lvol/lvol.o 00:03:40.969 LIB libspdk_bdev.a 00:03:40.969 SO libspdk_bdev.so.17.0 00:03:40.969 LIB libspdk_blobfs.a 00:03:40.969 SO libspdk_blobfs.so.11.0 00:03:40.969 SYMLINK libspdk_bdev.so 00:03:40.969 LIB libspdk_lvol.a 00:03:40.969 SYMLINK libspdk_blobfs.so 00:03:41.229 SO libspdk_lvol.so.11.0 00:03:41.229 SYMLINK libspdk_lvol.so 00:03:41.492 CC lib/ftl/ftl_core.o 00:03:41.492 CC lib/ftl/ftl_init.o 00:03:41.492 CC lib/ftl/ftl_layout.o 00:03:41.492 CC lib/scsi/dev.o 00:03:41.492 CC lib/ftl/ftl_debug.o 00:03:41.492 CC lib/ftl/ftl_io.o 00:03:41.492 CC lib/scsi/lun.o 00:03:41.492 CC lib/scsi/port.o 00:03:41.492 CC lib/ftl/ftl_sb.o 00:03:41.492 CC lib/scsi/scsi.o 00:03:41.492 CC lib/ftl/ftl_l2p.o 00:03:41.492 CC lib/scsi/scsi_bdev.o 00:03:41.492 CC lib/ftl/ftl_l2p_flat.o 00:03:41.492 CC lib/scsi/scsi_pr.o 00:03:41.492 CC lib/ftl/ftl_nv_cache.o 00:03:41.492 CC lib/nvmf/ctrlr.o 00:03:41.492 CC lib/scsi/scsi_rpc.o 00:03:41.492 CC lib/scsi/task.o 00:03:41.492 CC lib/nvmf/ctrlr_discovery.o 00:03:41.492 CC lib/ftl/ftl_band.o 00:03:41.492 CC lib/ftl/ftl_band_ops.o 00:03:41.492 CC lib/nvmf/ctrlr_bdev.o 00:03:41.492 CC lib/nbd/nbd.o 00:03:41.492 CC lib/ublk/ublk.o 00:03:41.492 CC lib/ftl/ftl_writer.o 00:03:41.492 CC lib/nbd/nbd_rpc.o 00:03:41.492 CC lib/ublk/ublk_rpc.o 00:03:41.492 CC lib/nvmf/nvmf.o 00:03:41.492 CC lib/ftl/ftl_rq.o 00:03:41.492 CC lib/ftl/ftl_reloc.o 00:03:41.492 CC lib/nvmf/subsystem.o 00:03:41.492 CC lib/nvmf/nvmf_rpc.o 00:03:41.492 CC lib/ftl/ftl_l2p_cache.o 00:03:41.492 CC lib/nvmf/transport.o 00:03:41.492 CC lib/nvmf/tcp.o 00:03:41.492 CC lib/ftl/ftl_p2l_log.o 00:03:41.492 CC lib/nvmf/stubs.o 00:03:41.492 CC lib/nvmf/vfio_user.o 00:03:41.492 CC lib/ftl/ftl_p2l.o 00:03:41.492 CC lib/nvmf/rdma.o 00:03:41.492 CC lib/nvmf/mdns_server.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt.o 00:03:41.492 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:41.492 CC lib/nvmf/auth.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:41.492 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:41.492 CC lib/ftl/utils/ftl_conf.o 00:03:41.492 CC lib/ftl/utils/ftl_md.o 00:03:41.493 CC lib/ftl/utils/ftl_mempool.o 00:03:41.493 CC lib/ftl/utils/ftl_property.o 00:03:41.493 CC lib/ftl/utils/ftl_bitmap.o 00:03:41.493 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:41.493 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:41.493 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:41.493 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:41.493 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:41.493 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:41.493 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:41.493 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:41.493 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:41.493 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:41.493 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:41.493 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:41.493 CC lib/ftl/base/ftl_base_dev.o 00:03:41.493 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:41.493 CC lib/ftl/base/ftl_base_bdev.o 00:03:41.493 CC lib/ftl/ftl_trace.o 00:03:42.062 LIB libspdk_nbd.a 00:03:42.062 SO libspdk_nbd.so.7.0 00:03:42.062 LIB libspdk_scsi.a 00:03:42.062 LIB libspdk_ublk.a 00:03:42.062 SYMLINK libspdk_nbd.so 00:03:42.062 SO libspdk_scsi.so.9.0 00:03:42.321 SO libspdk_ublk.so.3.0 00:03:42.321 SYMLINK libspdk_scsi.so 00:03:42.321 SYMLINK libspdk_ublk.so 00:03:42.321 LIB libspdk_ftl.a 00:03:42.581 SO libspdk_ftl.so.9.0 00:03:42.581 CC lib/iscsi/init_grp.o 00:03:42.581 CC lib/iscsi/conn.o 00:03:42.581 CC lib/iscsi/iscsi.o 00:03:42.581 CC lib/iscsi/iscsi_rpc.o 00:03:42.581 CC lib/iscsi/param.o 00:03:42.581 CC lib/iscsi/portal_grp.o 00:03:42.581 CC lib/iscsi/tgt_node.o 00:03:42.581 CC lib/iscsi/iscsi_subsystem.o 00:03:42.581 CC lib/iscsi/task.o 00:03:42.581 CC lib/vhost/vhost.o 00:03:42.581 CC lib/vhost/vhost_rpc.o 00:03:42.581 CC lib/vhost/vhost_scsi.o 00:03:42.581 CC lib/vhost/vhost_blk.o 00:03:42.581 CC lib/vhost/rte_vhost_user.o 00:03:42.841 SYMLINK libspdk_ftl.so 00:03:43.409 LIB libspdk_nvmf.a 00:03:43.409 SO libspdk_nvmf.so.20.0 00:03:43.409 LIB libspdk_vhost.a 00:03:43.409 SO libspdk_vhost.so.8.0 00:03:43.669 SYMLINK libspdk_nvmf.so 00:03:43.669 SYMLINK libspdk_vhost.so 00:03:43.669 LIB libspdk_iscsi.a 00:03:43.669 SO libspdk_iscsi.so.8.0 00:03:43.669 SYMLINK libspdk_iscsi.so 00:03:44.270 CC module/env_dpdk/env_dpdk_rpc.o 00:03:44.270 CC module/vfu_device/vfu_virtio_scsi.o 00:03:44.270 CC module/vfu_device/vfu_virtio.o 00:03:44.270 CC module/vfu_device/vfu_virtio_blk.o 00:03:44.270 CC module/vfu_device/vfu_virtio_rpc.o 00:03:44.270 CC module/vfu_device/vfu_virtio_fs.o 00:03:44.549 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:44.549 CC module/fsdev/aio/fsdev_aio.o 00:03:44.549 CC module/fsdev/aio/linux_aio_mgr.o 00:03:44.549 LIB libspdk_env_dpdk_rpc.a 00:03:44.549 CC module/sock/posix/posix.o 00:03:44.549 CC module/keyring/file/keyring_rpc.o 00:03:44.549 CC module/keyring/file/keyring.o 00:03:44.549 CC module/accel/error/accel_error.o 00:03:44.549 CC module/keyring/linux/keyring_rpc.o 00:03:44.549 CC 
module/accel/error/accel_error_rpc.o 00:03:44.549 CC module/keyring/linux/keyring.o 00:03:44.549 CC module/accel/iaa/accel_iaa.o 00:03:44.549 CC module/accel/iaa/accel_iaa_rpc.o 00:03:44.549 CC module/blob/bdev/blob_bdev.o 00:03:44.549 CC module/accel/dsa/accel_dsa.o 00:03:44.549 CC module/accel/dsa/accel_dsa_rpc.o 00:03:44.549 CC module/scheduler/gscheduler/gscheduler.o 00:03:44.549 CC module/accel/ioat/accel_ioat.o 00:03:44.549 CC module/accel/ioat/accel_ioat_rpc.o 00:03:44.549 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:44.549 SO libspdk_env_dpdk_rpc.so.6.0 00:03:44.549 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:44.549 SYMLINK libspdk_env_dpdk_rpc.so 00:03:44.549 LIB libspdk_keyring_file.a 00:03:44.549 LIB libspdk_keyring_linux.a 00:03:44.549 SO libspdk_keyring_file.so.2.0 00:03:44.549 LIB libspdk_scheduler_gscheduler.a 00:03:44.821 LIB libspdk_scheduler_dpdk_governor.a 00:03:44.821 SO libspdk_keyring_linux.so.1.0 00:03:44.821 LIB libspdk_accel_iaa.a 00:03:44.821 SO libspdk_scheduler_gscheduler.so.4.0 00:03:44.821 LIB libspdk_accel_error.a 00:03:44.821 LIB libspdk_accel_ioat.a 00:03:44.821 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:44.821 LIB libspdk_scheduler_dynamic.a 00:03:44.821 SO libspdk_accel_iaa.so.3.0 00:03:44.821 SYMLINK libspdk_keyring_file.so 00:03:44.821 SO libspdk_accel_ioat.so.6.0 00:03:44.821 SYMLINK libspdk_keyring_linux.so 00:03:44.821 SO libspdk_accel_error.so.2.0 00:03:44.821 SO libspdk_scheduler_dynamic.so.4.0 00:03:44.821 SYMLINK libspdk_scheduler_gscheduler.so 00:03:44.821 LIB libspdk_blob_bdev.a 00:03:44.821 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:44.821 LIB libspdk_accel_dsa.a 00:03:44.821 SYMLINK libspdk_accel_iaa.so 00:03:44.821 SYMLINK libspdk_scheduler_dynamic.so 00:03:44.821 SYMLINK libspdk_accel_ioat.so 00:03:44.821 SO libspdk_blob_bdev.so.12.0 00:03:44.821 SYMLINK libspdk_accel_error.so 00:03:44.821 SO libspdk_accel_dsa.so.5.0 00:03:44.821 SYMLINK libspdk_blob_bdev.so 00:03:44.821 LIB libspdk_vfu_device.a 00:03:44.821 SYMLINK libspdk_accel_dsa.so 00:03:44.821 SO libspdk_vfu_device.so.3.0 00:03:45.109 SYMLINK libspdk_vfu_device.so 00:03:45.109 LIB libspdk_fsdev_aio.a 00:03:45.109 SO libspdk_fsdev_aio.so.1.0 00:03:45.109 LIB libspdk_sock_posix.a 00:03:45.109 SYMLINK libspdk_fsdev_aio.so 00:03:45.109 SO libspdk_sock_posix.so.6.0 00:03:45.398 SYMLINK libspdk_sock_posix.so 00:03:45.398 CC module/blobfs/bdev/blobfs_bdev.o 00:03:45.398 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:45.398 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:45.398 CC module/bdev/delay/vbdev_delay.o 00:03:45.398 CC module/bdev/error/vbdev_error.o 00:03:45.398 CC module/bdev/error/vbdev_error_rpc.o 00:03:45.398 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:45.398 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:45.398 CC module/bdev/lvol/vbdev_lvol.o 00:03:45.398 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:45.398 CC module/bdev/split/vbdev_split.o 00:03:45.398 CC module/bdev/malloc/bdev_malloc.o 00:03:45.398 CC module/bdev/split/vbdev_split_rpc.o 00:03:45.398 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:45.398 CC module/bdev/gpt/gpt.o 00:03:45.398 CC module/bdev/raid/bdev_raid.o 00:03:45.398 CC module/bdev/raid/bdev_raid_rpc.o 00:03:45.398 CC module/bdev/gpt/vbdev_gpt.o 00:03:45.398 CC module/bdev/raid/bdev_raid_sb.o 00:03:45.398 CC module/bdev/raid/raid0.o 00:03:45.398 CC module/bdev/raid/raid1.o 00:03:45.398 CC module/bdev/raid/concat.o 00:03:45.398 CC module/bdev/null/bdev_null.o 00:03:45.398 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:03:45.398 CC module/bdev/iscsi/bdev_iscsi.o 00:03:45.398 CC module/bdev/null/bdev_null_rpc.o 00:03:45.398 CC module/bdev/nvme/bdev_nvme.o 00:03:45.398 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:45.398 CC module/bdev/nvme/bdev_mdns_client.o 00:03:45.398 CC module/bdev/nvme/nvme_rpc.o 00:03:45.398 CC module/bdev/nvme/vbdev_opal.o 00:03:45.398 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:45.398 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:45.398 CC module/bdev/ftl/bdev_ftl.o 00:03:45.398 CC module/bdev/aio/bdev_aio.o 00:03:45.398 CC module/bdev/aio/bdev_aio_rpc.o 00:03:45.398 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:45.398 CC module/bdev/passthru/vbdev_passthru.o 00:03:45.398 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:45.398 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:45.398 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:45.398 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:45.714 LIB libspdk_blobfs_bdev.a 00:03:45.714 LIB libspdk_bdev_error.a 00:03:45.714 SO libspdk_blobfs_bdev.so.6.0 00:03:45.714 LIB libspdk_bdev_null.a 00:03:45.714 LIB libspdk_bdev_split.a 00:03:45.714 SO libspdk_bdev_error.so.6.0 00:03:45.714 SO libspdk_bdev_null.so.6.0 00:03:45.714 LIB libspdk_bdev_ftl.a 00:03:45.714 LIB libspdk_bdev_zone_block.a 00:03:45.714 SO libspdk_bdev_split.so.6.0 00:03:45.714 SYMLINK libspdk_blobfs_bdev.so 00:03:45.714 SO libspdk_bdev_ftl.so.6.0 00:03:45.714 SYMLINK libspdk_bdev_error.so 00:03:45.714 LIB libspdk_bdev_delay.a 00:03:45.714 LIB libspdk_bdev_gpt.a 00:03:45.714 SO libspdk_bdev_zone_block.so.6.0 00:03:45.714 SYMLINK libspdk_bdev_null.so 00:03:45.996 LIB libspdk_bdev_iscsi.a 00:03:45.996 LIB libspdk_bdev_passthru.a 00:03:45.996 SO libspdk_bdev_gpt.so.6.0 00:03:45.996 SO libspdk_bdev_delay.so.6.0 00:03:45.996 SYMLINK libspdk_bdev_split.so 00:03:45.996 LIB libspdk_bdev_aio.a 00:03:45.996 SO libspdk_bdev_iscsi.so.6.0 00:03:45.996 SO libspdk_bdev_passthru.so.6.0 00:03:45.996 SYMLINK libspdk_bdev_ftl.so 00:03:45.996 SYMLINK libspdk_bdev_zone_block.so 00:03:45.996 LIB libspdk_bdev_malloc.a 00:03:45.996 SYMLINK libspdk_bdev_gpt.so 00:03:45.996 SO libspdk_bdev_aio.so.6.0 00:03:45.996 SYMLINK libspdk_bdev_delay.so 00:03:45.996 SO libspdk_bdev_malloc.so.6.0 00:03:45.996 SYMLINK libspdk_bdev_passthru.so 00:03:45.996 SYMLINK libspdk_bdev_iscsi.so 00:03:45.996 LIB libspdk_bdev_virtio.a 00:03:45.996 SYMLINK libspdk_bdev_aio.so 00:03:45.996 LIB libspdk_bdev_lvol.a 00:03:45.996 SYMLINK libspdk_bdev_malloc.so 00:03:45.996 SO libspdk_bdev_virtio.so.6.0 00:03:45.996 SO libspdk_bdev_lvol.so.6.0 00:03:45.996 SYMLINK libspdk_bdev_virtio.so 00:03:45.996 SYMLINK libspdk_bdev_lvol.so 00:03:46.256 LIB libspdk_bdev_raid.a 00:03:46.256 SO libspdk_bdev_raid.so.6.0 00:03:46.516 SYMLINK libspdk_bdev_raid.so 00:03:47.456 LIB libspdk_bdev_nvme.a 00:03:47.456 SO libspdk_bdev_nvme.so.7.1 00:03:47.456 SYMLINK libspdk_bdev_nvme.so 00:03:48.026 CC module/event/subsystems/iobuf/iobuf.o 00:03:48.026 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:48.026 CC module/event/subsystems/vmd/vmd.o 00:03:48.026 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:48.026 CC module/event/subsystems/scheduler/scheduler.o 00:03:48.026 CC module/event/subsystems/keyring/keyring.o 00:03:48.026 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:48.026 CC module/event/subsystems/sock/sock.o 00:03:48.026 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:48.027 CC module/event/subsystems/fsdev/fsdev.o 00:03:48.286 LIB libspdk_event_keyring.a 00:03:48.286 LIB libspdk_event_scheduler.a 00:03:48.286 LIB libspdk_event_vfu_tgt.a 
00:03:48.286 LIB libspdk_event_fsdev.a 00:03:48.286 SO libspdk_event_keyring.so.1.0 00:03:48.286 LIB libspdk_event_vmd.a 00:03:48.286 LIB libspdk_event_iobuf.a 00:03:48.286 LIB libspdk_event_vhost_blk.a 00:03:48.286 LIB libspdk_event_sock.a 00:03:48.286 SO libspdk_event_vfu_tgt.so.3.0 00:03:48.286 SO libspdk_event_scheduler.so.4.0 00:03:48.286 SO libspdk_event_fsdev.so.1.0 00:03:48.286 SO libspdk_event_vhost_blk.so.3.0 00:03:48.286 SO libspdk_event_sock.so.5.0 00:03:48.286 SO libspdk_event_vmd.so.6.0 00:03:48.286 SO libspdk_event_iobuf.so.3.0 00:03:48.286 SYMLINK libspdk_event_keyring.so 00:03:48.286 SYMLINK libspdk_event_scheduler.so 00:03:48.286 SYMLINK libspdk_event_vfu_tgt.so 00:03:48.286 SYMLINK libspdk_event_fsdev.so 00:03:48.286 SYMLINK libspdk_event_vhost_blk.so 00:03:48.286 SYMLINK libspdk_event_sock.so 00:03:48.286 SYMLINK libspdk_event_vmd.so 00:03:48.286 SYMLINK libspdk_event_iobuf.so 00:03:48.856 CC module/event/subsystems/accel/accel.o 00:03:48.856 LIB libspdk_event_accel.a 00:03:48.856 SO libspdk_event_accel.so.6.0 00:03:48.856 SYMLINK libspdk_event_accel.so 00:03:49.425 CC module/event/subsystems/bdev/bdev.o 00:03:49.425 LIB libspdk_event_bdev.a 00:03:49.425 SO libspdk_event_bdev.so.6.0 00:03:49.685 SYMLINK libspdk_event_bdev.so 00:03:49.945 CC module/event/subsystems/ublk/ublk.o 00:03:49.945 CC module/event/subsystems/scsi/scsi.o 00:03:49.945 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:49.945 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:49.945 CC module/event/subsystems/nbd/nbd.o 00:03:49.945 LIB libspdk_event_scsi.a 00:03:49.945 LIB libspdk_event_ublk.a 00:03:50.204 LIB libspdk_event_nbd.a 00:03:50.204 SO libspdk_event_scsi.so.6.0 00:03:50.204 SO libspdk_event_ublk.so.3.0 00:03:50.204 SO libspdk_event_nbd.so.6.0 00:03:50.204 SYMLINK libspdk_event_scsi.so 00:03:50.204 LIB libspdk_event_nvmf.a 00:03:50.204 SYMLINK libspdk_event_ublk.so 00:03:50.204 SYMLINK libspdk_event_nbd.so 00:03:50.204 SO libspdk_event_nvmf.so.6.0 00:03:50.204 SYMLINK libspdk_event_nvmf.so 00:03:50.464 CC module/event/subsystems/iscsi/iscsi.o 00:03:50.464 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.725 LIB libspdk_event_vhost_scsi.a 00:03:50.725 LIB libspdk_event_iscsi.a 00:03:50.725 SO libspdk_event_vhost_scsi.so.3.0 00:03:50.725 SO libspdk_event_iscsi.so.6.0 00:03:50.725 SYMLINK libspdk_event_vhost_scsi.so 00:03:50.725 SYMLINK libspdk_event_iscsi.so 00:03:50.985 SO libspdk.so.6.0 00:03:50.985 SYMLINK libspdk.so 00:03:51.245 CC app/trace_record/trace_record.o 00:03:51.245 CXX app/trace/trace.o 00:03:51.245 CC app/spdk_top/spdk_top.o 00:03:51.245 CC test/rpc_client/rpc_client_test.o 00:03:51.245 CC app/spdk_nvme_identify/identify.o 00:03:51.245 TEST_HEADER include/spdk/accel.h 00:03:51.245 TEST_HEADER include/spdk/accel_module.h 00:03:51.245 TEST_HEADER include/spdk/assert.h 00:03:51.245 TEST_HEADER include/spdk/barrier.h 00:03:51.245 TEST_HEADER include/spdk/base64.h 00:03:51.245 TEST_HEADER include/spdk/bdev.h 00:03:51.245 CC app/spdk_nvme_perf/perf.o 00:03:51.245 TEST_HEADER include/spdk/bdev_module.h 00:03:51.245 TEST_HEADER include/spdk/bdev_zone.h 00:03:51.245 CC app/spdk_nvme_discover/discovery_aer.o 00:03:51.245 TEST_HEADER include/spdk/bit_array.h 00:03:51.245 TEST_HEADER include/spdk/bit_pool.h 00:03:51.245 CC app/spdk_lspci/spdk_lspci.o 00:03:51.245 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:51.245 TEST_HEADER include/spdk/blob_bdev.h 00:03:51.245 TEST_HEADER include/spdk/blobfs.h 00:03:51.245 TEST_HEADER include/spdk/blob.h 00:03:51.245 TEST_HEADER 
include/spdk/conf.h 00:03:51.245 TEST_HEADER include/spdk/config.h 00:03:51.245 TEST_HEADER include/spdk/crc64.h 00:03:51.245 TEST_HEADER include/spdk/dif.h 00:03:51.245 TEST_HEADER include/spdk/crc32.h 00:03:51.245 TEST_HEADER include/spdk/crc16.h 00:03:51.245 TEST_HEADER include/spdk/cpuset.h 00:03:51.245 TEST_HEADER include/spdk/dma.h 00:03:51.245 TEST_HEADER include/spdk/endian.h 00:03:51.245 TEST_HEADER include/spdk/env.h 00:03:51.245 TEST_HEADER include/spdk/env_dpdk.h 00:03:51.245 TEST_HEADER include/spdk/fd_group.h 00:03:51.245 TEST_HEADER include/spdk/event.h 00:03:51.245 TEST_HEADER include/spdk/fd.h 00:03:51.245 TEST_HEADER include/spdk/file.h 00:03:51.245 TEST_HEADER include/spdk/fsdev.h 00:03:51.245 TEST_HEADER include/spdk/fsdev_module.h 00:03:51.245 TEST_HEADER include/spdk/ftl.h 00:03:51.245 TEST_HEADER include/spdk/hexlify.h 00:03:51.245 TEST_HEADER include/spdk/gpt_spec.h 00:03:51.245 TEST_HEADER include/spdk/histogram_data.h 00:03:51.245 TEST_HEADER include/spdk/idxd_spec.h 00:03:51.245 TEST_HEADER include/spdk/idxd.h 00:03:51.245 TEST_HEADER include/spdk/init.h 00:03:51.245 TEST_HEADER include/spdk/ioat.h 00:03:51.245 TEST_HEADER include/spdk/ioat_spec.h 00:03:51.245 CC app/nvmf_tgt/nvmf_main.o 00:03:51.245 CC app/iscsi_tgt/iscsi_tgt.o 00:03:51.245 TEST_HEADER include/spdk/iscsi_spec.h 00:03:51.245 TEST_HEADER include/spdk/json.h 00:03:51.245 CC app/spdk_dd/spdk_dd.o 00:03:51.245 TEST_HEADER include/spdk/jsonrpc.h 00:03:51.245 TEST_HEADER include/spdk/keyring.h 00:03:51.245 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:51.245 TEST_HEADER include/spdk/likely.h 00:03:51.245 TEST_HEADER include/spdk/log.h 00:03:51.245 TEST_HEADER include/spdk/lvol.h 00:03:51.245 TEST_HEADER include/spdk/keyring_module.h 00:03:51.245 TEST_HEADER include/spdk/mmio.h 00:03:51.245 TEST_HEADER include/spdk/md5.h 00:03:51.245 TEST_HEADER include/spdk/memory.h 00:03:51.245 TEST_HEADER include/spdk/nbd.h 00:03:51.245 TEST_HEADER include/spdk/notify.h 00:03:51.245 TEST_HEADER include/spdk/net.h 00:03:51.245 TEST_HEADER include/spdk/nvme.h 00:03:51.245 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.245 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.245 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.245 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.510 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.510 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.510 TEST_HEADER include/spdk/nvmf.h 00:03:51.510 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.510 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.510 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.510 TEST_HEADER include/spdk/pci_ids.h 00:03:51.510 TEST_HEADER include/spdk/opal.h 00:03:51.510 TEST_HEADER include/spdk/opal_spec.h 00:03:51.510 TEST_HEADER include/spdk/queue.h 00:03:51.510 TEST_HEADER include/spdk/pipe.h 00:03:51.510 TEST_HEADER include/spdk/rpc.h 00:03:51.510 TEST_HEADER include/spdk/scheduler.h 00:03:51.510 TEST_HEADER include/spdk/reduce.h 00:03:51.510 TEST_HEADER include/spdk/scsi.h 00:03:51.510 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.510 TEST_HEADER include/spdk/sock.h 00:03:51.510 TEST_HEADER include/spdk/stdinc.h 00:03:51.510 TEST_HEADER include/spdk/string.h 00:03:51.510 TEST_HEADER include/spdk/thread.h 00:03:51.510 TEST_HEADER include/spdk/trace.h 00:03:51.510 TEST_HEADER include/spdk/trace_parser.h 00:03:51.510 CC app/spdk_tgt/spdk_tgt.o 00:03:51.510 TEST_HEADER include/spdk/ublk.h 00:03:51.510 TEST_HEADER include/spdk/util.h 00:03:51.510 TEST_HEADER include/spdk/uuid.h 00:03:51.510 TEST_HEADER include/spdk/tree.h 
00:03:51.510 TEST_HEADER include/spdk/version.h 00:03:51.510 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.510 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.510 TEST_HEADER include/spdk/vhost.h 00:03:51.510 TEST_HEADER include/spdk/vmd.h 00:03:51.510 CXX test/cpp_headers/accel.o 00:03:51.510 TEST_HEADER include/spdk/zipf.h 00:03:51.510 CXX test/cpp_headers/accel_module.o 00:03:51.510 TEST_HEADER include/spdk/xor.h 00:03:51.510 CXX test/cpp_headers/assert.o 00:03:51.510 CXX test/cpp_headers/base64.o 00:03:51.510 CXX test/cpp_headers/bdev_module.o 00:03:51.510 CXX test/cpp_headers/barrier.o 00:03:51.510 CXX test/cpp_headers/bdev_zone.o 00:03:51.510 CXX test/cpp_headers/bdev.o 00:03:51.510 CXX test/cpp_headers/bit_array.o 00:03:51.510 CXX test/cpp_headers/blob_bdev.o 00:03:51.510 CXX test/cpp_headers/blobfs_bdev.o 00:03:51.510 CXX test/cpp_headers/bit_pool.o 00:03:51.510 CXX test/cpp_headers/blobfs.o 00:03:51.510 CXX test/cpp_headers/conf.o 00:03:51.510 CXX test/cpp_headers/config.o 00:03:51.510 CXX test/cpp_headers/blob.o 00:03:51.510 CXX test/cpp_headers/crc16.o 00:03:51.510 CXX test/cpp_headers/cpuset.o 00:03:51.510 CXX test/cpp_headers/crc64.o 00:03:51.510 CXX test/cpp_headers/crc32.o 00:03:51.510 CXX test/cpp_headers/dif.o 00:03:51.510 CXX test/cpp_headers/endian.o 00:03:51.510 CXX test/cpp_headers/dma.o 00:03:51.510 CXX test/cpp_headers/event.o 00:03:51.510 CXX test/cpp_headers/env.o 00:03:51.510 CXX test/cpp_headers/file.o 00:03:51.511 CXX test/cpp_headers/env_dpdk.o 00:03:51.511 CXX test/cpp_headers/fsdev_module.o 00:03:51.511 CXX test/cpp_headers/fsdev.o 00:03:51.511 CXX test/cpp_headers/fd_group.o 00:03:51.511 CXX test/cpp_headers/fd.o 00:03:51.511 CXX test/cpp_headers/ftl.o 00:03:51.511 CXX test/cpp_headers/histogram_data.o 00:03:51.511 CXX test/cpp_headers/gpt_spec.o 00:03:51.511 CXX test/cpp_headers/hexlify.o 00:03:51.511 CXX test/cpp_headers/init.o 00:03:51.511 CXX test/cpp_headers/idxd.o 00:03:51.511 CXX test/cpp_headers/idxd_spec.o 00:03:51.511 CXX test/cpp_headers/ioat.o 00:03:51.511 CXX test/cpp_headers/iscsi_spec.o 00:03:51.511 CXX test/cpp_headers/ioat_spec.o 00:03:51.511 CXX test/cpp_headers/keyring.o 00:03:51.511 CXX test/cpp_headers/json.o 00:03:51.511 CXX test/cpp_headers/jsonrpc.o 00:03:51.511 CXX test/cpp_headers/log.o 00:03:51.511 CXX test/cpp_headers/keyring_module.o 00:03:51.511 CXX test/cpp_headers/likely.o 00:03:51.511 CXX test/cpp_headers/lvol.o 00:03:51.511 CXX test/cpp_headers/md5.o 00:03:51.511 CXX test/cpp_headers/memory.o 00:03:51.511 CXX test/cpp_headers/nbd.o 00:03:51.511 CXX test/cpp_headers/mmio.o 00:03:51.511 CXX test/cpp_headers/notify.o 00:03:51.511 CXX test/cpp_headers/net.o 00:03:51.511 CXX test/cpp_headers/nvme.o 00:03:51.511 CXX test/cpp_headers/nvme_intel.o 00:03:51.511 CXX test/cpp_headers/nvme_ocssd.o 00:03:51.511 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:51.511 CXX test/cpp_headers/nvme_spec.o 00:03:51.511 CXX test/cpp_headers/nvmf_cmd.o 00:03:51.511 CXX test/cpp_headers/nvme_zns.o 00:03:51.511 CXX test/cpp_headers/nvmf.o 00:03:51.511 CXX test/cpp_headers/nvmf_spec.o 00:03:51.511 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:51.511 CXX test/cpp_headers/nvmf_transport.o 00:03:51.511 CXX test/cpp_headers/opal_spec.o 00:03:51.511 CXX test/cpp_headers/opal.o 00:03:51.511 CXX test/cpp_headers/pci_ids.o 00:03:51.511 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.511 CC test/app/histogram_perf/histogram_perf.o 00:03:51.511 CC test/env/memory/memory_ut.o 00:03:51.511 CC examples/ioat/verify/verify.o 00:03:51.511 CC 
app/fio/nvme/fio_plugin.o 00:03:51.790 CC test/thread/poller_perf/poller_perf.o 00:03:51.790 LINK spdk_lspci 00:03:51.790 CC test/env/vtophys/vtophys.o 00:03:51.790 CC test/app/jsoncat/jsoncat.o 00:03:51.790 CC examples/util/zipf/zipf.o 00:03:51.790 CC test/env/pci/pci_ut.o 00:03:51.790 CC test/dma/test_dma/test_dma.o 00:03:51.790 CC test/app/stub/stub.o 00:03:51.790 CC examples/ioat/perf/perf.o 00:03:51.790 CC test/app/bdev_svc/bdev_svc.o 00:03:51.790 CC app/fio/bdev/fio_plugin.o 00:03:51.790 LINK rpc_client_test 00:03:51.790 LINK nvmf_tgt 00:03:51.790 LINK iscsi_tgt 00:03:51.790 LINK interrupt_tgt 00:03:52.060 LINK spdk_trace_record 00:03:52.060 CC test/env/mem_callbacks/mem_callbacks.o 00:03:52.060 CXX test/cpp_headers/pipe.o 00:03:52.060 CXX test/cpp_headers/queue.o 00:03:52.060 CXX test/cpp_headers/reduce.o 00:03:52.060 CXX test/cpp_headers/rpc.o 00:03:52.060 CXX test/cpp_headers/scheduler.o 00:03:52.060 CXX test/cpp_headers/scsi.o 00:03:52.060 CXX test/cpp_headers/scsi_spec.o 00:03:52.060 CXX test/cpp_headers/sock.o 00:03:52.060 LINK histogram_perf 00:03:52.060 CXX test/cpp_headers/stdinc.o 00:03:52.060 CXX test/cpp_headers/string.o 00:03:52.060 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:52.060 LINK spdk_nvme_discover 00:03:52.060 CXX test/cpp_headers/thread.o 00:03:52.060 CXX test/cpp_headers/trace.o 00:03:52.060 CXX test/cpp_headers/trace_parser.o 00:03:52.060 CXX test/cpp_headers/tree.o 00:03:52.060 CXX test/cpp_headers/ublk.o 00:03:52.060 LINK env_dpdk_post_init 00:03:52.060 CXX test/cpp_headers/uuid.o 00:03:52.060 CXX test/cpp_headers/version.o 00:03:52.060 CXX test/cpp_headers/util.o 00:03:52.060 CXX test/cpp_headers/vfio_user_pci.o 00:03:52.060 CXX test/cpp_headers/vfio_user_spec.o 00:03:52.060 CXX test/cpp_headers/vmd.o 00:03:52.060 CXX test/cpp_headers/xor.o 00:03:52.060 CXX test/cpp_headers/zipf.o 00:03:52.060 CXX test/cpp_headers/vhost.o 00:03:52.060 LINK bdev_svc 00:03:52.321 LINK spdk_dd 00:03:52.321 LINK verify 00:03:52.321 LINK ioat_perf 00:03:52.321 LINK poller_perf 00:03:52.321 LINK spdk_tgt 00:03:52.321 LINK jsoncat 00:03:52.321 LINK vtophys 00:03:52.321 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.321 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.321 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.321 LINK zipf 00:03:52.321 LINK stub 00:03:52.581 LINK mem_callbacks 00:03:52.581 LINK spdk_trace 00:03:52.581 LINK pci_ut 00:03:52.581 LINK test_dma 00:03:52.840 LINK spdk_nvme_identify 00:03:52.840 LINK nvme_fuzz 00:03:52.840 LINK spdk_nvme 00:03:52.840 LINK spdk_bdev 00:03:52.840 LINK vhost_fuzz 00:03:52.840 CC test/event/reactor_perf/reactor_perf.o 00:03:52.840 CC test/event/reactor/reactor.o 00:03:52.840 LINK memory_ut 00:03:52.840 CC test/event/event_perf/event_perf.o 00:03:52.840 LINK spdk_nvme_perf 00:03:52.840 CC test/event/app_repeat/app_repeat.o 00:03:52.840 CC examples/idxd/perf/perf.o 00:03:52.840 CC examples/vmd/led/led.o 00:03:52.840 CC examples/vmd/lsvmd/lsvmd.o 00:03:52.840 CC test/event/scheduler/scheduler.o 00:03:52.840 CC examples/sock/hello_world/hello_sock.o 00:03:52.840 LINK spdk_top 00:03:52.840 CC app/vhost/vhost.o 00:03:52.840 CC examples/thread/thread/thread_ex.o 00:03:53.099 LINK reactor_perf 00:03:53.099 LINK reactor 00:03:53.099 LINK lsvmd 00:03:53.099 LINK event_perf 00:03:53.099 LINK led 00:03:53.099 LINK app_repeat 00:03:53.099 LINK hello_sock 00:03:53.099 LINK scheduler 00:03:53.099 LINK vhost 00:03:53.099 LINK idxd_perf 00:03:53.099 CC test/nvme/sgl/sgl.o 00:03:53.100 CC test/nvme/reserve/reserve.o 00:03:53.100 CC 
test/nvme/reset/reset.o 00:03:53.100 CC test/nvme/e2edp/nvme_dp.o 00:03:53.100 CC test/nvme/simple_copy/simple_copy.o 00:03:53.100 CC test/nvme/boot_partition/boot_partition.o 00:03:53.100 CC test/nvme/err_injection/err_injection.o 00:03:53.100 CC test/nvme/compliance/nvme_compliance.o 00:03:53.100 CC test/nvme/startup/startup.o 00:03:53.100 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.100 CC test/nvme/connect_stress/connect_stress.o 00:03:53.100 CC test/nvme/aer/aer.o 00:03:53.100 CC test/nvme/cuse/cuse.o 00:03:53.100 LINK thread 00:03:53.100 CC test/nvme/overhead/overhead.o 00:03:53.100 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.100 CC test/nvme/fdp/fdp.o 00:03:53.100 CC test/blobfs/mkfs/mkfs.o 00:03:53.100 CC test/accel/dif/dif.o 00:03:53.358 CC test/lvol/esnap/esnap.o 00:03:53.358 LINK boot_partition 00:03:53.358 LINK startup 00:03:53.358 LINK connect_stress 00:03:53.358 LINK fused_ordering 00:03:53.358 LINK err_injection 00:03:53.358 LINK reserve 00:03:53.358 LINK doorbell_aers 00:03:53.358 LINK simple_copy 00:03:53.359 LINK mkfs 00:03:53.359 LINK reset 00:03:53.359 LINK nvme_dp 00:03:53.359 LINK sgl 00:03:53.618 LINK overhead 00:03:53.618 LINK aer 00:03:53.618 LINK nvme_compliance 00:03:53.618 LINK fdp 00:03:53.618 CC examples/nvme/arbitration/arbitration.o 00:03:53.618 CC examples/nvme/hotplug/hotplug.o 00:03:53.618 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:53.618 CC examples/nvme/hello_world/hello_world.o 00:03:53.618 CC examples/nvme/reconnect/reconnect.o 00:03:53.618 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:53.618 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.618 CC examples/nvme/abort/abort.o 00:03:53.618 CC examples/accel/perf/accel_perf.o 00:03:53.618 CC examples/blob/hello_world/hello_blob.o 00:03:53.618 CC examples/blob/cli/blobcli.o 00:03:53.877 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:53.877 LINK pmr_persistence 00:03:53.877 LINK iscsi_fuzz 00:03:53.877 LINK cmb_copy 00:03:53.877 LINK dif 00:03:53.877 LINK hotplug 00:03:53.877 LINK hello_world 00:03:53.877 LINK arbitration 00:03:53.877 LINK reconnect 00:03:53.877 LINK abort 00:03:53.877 LINK hello_blob 00:03:54.137 LINK hello_fsdev 00:03:54.137 LINK nvme_manage 00:03:54.137 LINK accel_perf 00:03:54.137 LINK blobcli 00:03:54.397 LINK cuse 00:03:54.397 CC test/bdev/bdevio/bdevio.o 00:03:54.656 CC examples/bdev/hello_world/hello_bdev.o 00:03:54.656 CC examples/bdev/bdevperf/bdevperf.o 00:03:54.656 LINK bdevio 00:03:54.915 LINK hello_bdev 00:03:55.173 LINK bdevperf 00:03:55.741 CC examples/nvmf/nvmf/nvmf.o 00:03:56.000 LINK nvmf 00:03:56.938 LINK esnap 00:03:57.198 00:03:57.198 real 0m55.444s 00:03:57.198 user 6m48.823s 00:03:57.198 sys 3m2.088s 00:03:57.198 12:09:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:57.198 12:09:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.198 ************************************ 00:03:57.198 END TEST make 00:03:57.198 ************************************ 00:03:57.198 12:09:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.198 12:09:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.198 12:09:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.198 12:09:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.198 12:09:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.198 12:09:24 -- pm/common@44 -- $ pid=7590 00:03:57.198 12:09:24 -- pm/common@50 -- $ kill -TERM 7590 
00:03:57.198 12:09:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.198 12:09:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.198 12:09:24 -- pm/common@44 -- $ pid=7592 00:03:57.198 12:09:24 -- pm/common@50 -- $ kill -TERM 7592 00:03:57.198 12:09:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.198 12:09:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:57.198 12:09:24 -- pm/common@44 -- $ pid=7593 00:03:57.198 12:09:24 -- pm/common@50 -- $ kill -TERM 7593 00:03:57.198 12:09:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.198 12:09:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:57.198 12:09:24 -- pm/common@44 -- $ pid=7617 00:03:57.198 12:09:24 -- pm/common@50 -- $ sudo -E kill -TERM 7617 00:03:57.198 12:09:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:57.198 12:09:24 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:57.198 12:09:24 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.198 12:09:24 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.198 12:09:24 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.458 12:09:24 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.458 12:09:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.458 12:09:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.458 12:09:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.458 12:09:24 -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.458 12:09:24 -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.458 12:09:24 -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.458 12:09:24 -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.458 12:09:24 -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.458 12:09:24 -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.458 12:09:24 -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.458 12:09:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.458 12:09:24 -- scripts/common.sh@344 -- # case "$op" in 00:03:57.458 12:09:24 -- scripts/common.sh@345 -- # : 1 00:03:57.458 12:09:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.458 12:09:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.458 12:09:24 -- scripts/common.sh@365 -- # decimal 1 00:03:57.458 12:09:24 -- scripts/common.sh@353 -- # local d=1 00:03:57.458 12:09:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.458 12:09:24 -- scripts/common.sh@355 -- # echo 1 00:03:57.458 12:09:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.458 12:09:24 -- scripts/common.sh@366 -- # decimal 2 00:03:57.458 12:09:24 -- scripts/common.sh@353 -- # local d=2 00:03:57.458 12:09:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.458 12:09:24 -- scripts/common.sh@355 -- # echo 2 00:03:57.458 12:09:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.458 12:09:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.458 12:09:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.458 12:09:24 -- scripts/common.sh@368 -- # return 0 00:03:57.458 12:09:24 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.458 12:09:24 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.458 --rc genhtml_branch_coverage=1 00:03:57.458 --rc genhtml_function_coverage=1 00:03:57.458 --rc genhtml_legend=1 00:03:57.458 --rc geninfo_all_blocks=1 00:03:57.458 --rc geninfo_unexecuted_blocks=1 00:03:57.458 00:03:57.458 ' 00:03:57.458 12:09:24 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.458 --rc genhtml_branch_coverage=1 00:03:57.458 --rc genhtml_function_coverage=1 00:03:57.458 --rc genhtml_legend=1 00:03:57.458 --rc geninfo_all_blocks=1 00:03:57.458 --rc geninfo_unexecuted_blocks=1 00:03:57.458 00:03:57.458 ' 00:03:57.458 12:09:24 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:57.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.458 --rc genhtml_branch_coverage=1 00:03:57.458 --rc genhtml_function_coverage=1 00:03:57.458 --rc genhtml_legend=1 00:03:57.458 --rc geninfo_all_blocks=1 00:03:57.458 --rc geninfo_unexecuted_blocks=1 00:03:57.458 00:03:57.458 ' 00:03:57.458 12:09:24 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.458 --rc genhtml_branch_coverage=1 00:03:57.458 --rc genhtml_function_coverage=1 00:03:57.458 --rc genhtml_legend=1 00:03:57.458 --rc geninfo_all_blocks=1 00:03:57.458 --rc geninfo_unexecuted_blocks=1 00:03:57.458 00:03:57.458 ' 00:03:57.458 12:09:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.458 12:09:24 -- nvmf/common.sh@7 -- # uname -s 00:03:57.458 12:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.458 12:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.458 12:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.458 12:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.458 12:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.458 12:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.458 12:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.458 12:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.458 12:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.458 12:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.458 12:09:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:57.458 12:09:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:57.458 12:09:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.458 12:09:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.458 12:09:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.458 12:09:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.458 12:09:25 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.458 12:09:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.458 12:09:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.458 12:09:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.458 12:09:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.458 12:09:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.458 12:09:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.458 12:09:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.458 12:09:25 -- paths/export.sh@5 -- # export PATH 00:03:57.458 12:09:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.458 12:09:25 -- nvmf/common.sh@51 -- # : 0 00:03:57.458 12:09:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.458 12:09:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:57.458 12:09:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.458 12:09:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.458 12:09:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.458 12:09:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.459 12:09:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.459 12:09:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.459 12:09:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.459 12:09:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.459 12:09:25 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.459 12:09:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.459 12:09:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.459 12:09:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:57.459 12:09:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.459 12:09:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.459 12:09:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.459 12:09:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.459 12:09:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.459 12:09:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.459 12:09:25 -- spdk/autotest.sh@48 -- # udevadm_pid=88062 00:03:57.459 12:09:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.459 12:09:25 -- pm/common@17 -- # local monitor 00:03:57.459 12:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.459 12:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.459 12:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.459 12:09:25 -- pm/common@21 -- # date +%s 00:03:57.459 12:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.459 12:09:25 -- pm/common@21 -- # date +%s 00:03:57.459 12:09:25 -- pm/common@25 -- # sleep 1 00:03:57.459 12:09:25 -- pm/common@21 -- # date +%s 00:03:57.459 12:09:25 -- pm/common@21 -- # date +%s 00:03:57.459 12:09:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734088165 00:03:57.459 12:09:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734088165 00:03:57.459 12:09:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734088165 00:03:57.459 12:09:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734088165 00:03:57.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734088165_collect-vmstat.pm.log 00:03:57.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734088165_collect-cpu-load.pm.log 00:03:57.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734088165_collect-cpu-temp.pm.log 00:03:57.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734088165_collect-bmc-pm.bmc.pm.log 00:03:58.398 12:09:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.398 12:09:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.398 12:09:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.398 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:03:58.657 12:09:26 -- spdk/autotest.sh@59 -- # create_test_list 00:03:58.657 12:09:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:58.657 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:03:58.657 12:09:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:58.657 12:09:26 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.657 12:09:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.657 12:09:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:58.657 12:09:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.657 12:09:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.657 12:09:26 -- common/autotest_common.sh@1457 -- # uname 00:03:58.657 12:09:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:58.657 12:09:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.657 12:09:26 -- common/autotest_common.sh@1477 -- # uname 00:03:58.657 12:09:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:58.657 12:09:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:58.657 12:09:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:58.657 lcov: LCOV version 1.15 00:03:58.657 12:09:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:16.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:16.762 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.356 12:09:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:23.356 12:09:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.356 12:09:50 -- common/autotest_common.sh@10 -- # set +x 00:04:23.356 12:09:50 -- spdk/autotest.sh@78 -- # rm -f 00:04:23.356 12:09:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.653 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:26.653 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:26.653 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:26.653 12:09:54 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:26.653 12:09:54 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:26.653 12:09:54 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:26.653 12:09:54 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:26.653 12:09:54 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:26.653 12:09:54 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:26.653 12:09:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:26.653 12:09:54 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:26.653 12:09:54 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:26.653 12:09:54 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:26.653 12:09:54 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:26.653 12:09:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.653 12:09:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:26.653 12:09:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:26.653 12:09:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.653 12:09:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.653 12:09:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:26.653 12:09:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:26.653 12:09:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:26.653 No valid GPT data, bailing 00:04:26.653 12:09:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.653 12:09:54 -- scripts/common.sh@394 -- # pt= 00:04:26.653 12:09:54 -- scripts/common.sh@395 -- # return 1 00:04:26.653 12:09:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:26.653 1+0 records in 00:04:26.653 1+0 records out 00:04:26.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540084 s, 194 MB/s 00:04:26.653 12:09:54 -- spdk/autotest.sh@105 -- # sync 00:04:26.653 12:09:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:26.653 12:09:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:26.653 12:09:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:31.936 12:09:59 -- spdk/autotest.sh@111 -- # uname -s 00:04:31.936 12:09:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:31.936 12:09:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:31.936 12:09:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:34.478 Hugepages 00:04:34.478 node hugesize free / total 00:04:34.478 node0 1048576kB 0 / 0 00:04:34.478 node0 2048kB 0 / 0 00:04:34.478 node1 1048576kB 0 / 0 00:04:34.478 node1 2048kB 0 / 0 00:04:34.478 00:04:34.478 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.478 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:34.478 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:34.478 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:34.738 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:34.738 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:34.738 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:34.738 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:34.738 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:34.738 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:34.739 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:04:34.739 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:34.739 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:34.739 12:10:02 -- spdk/autotest.sh@117 -- # uname -s 00:04:34.739 12:10:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:34.739 12:10:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:34.739 12:10:02 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.036 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.036 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.607 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.607 12:10:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:39.989 12:10:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:39.989 12:10:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:39.989 12:10:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.989 12:10:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:39.989 12:10:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:39.989 12:10:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:39.989 12:10:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.989 12:10:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.989 12:10:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:39.989 12:10:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:39.989 12:10:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:39.989 12:10:07 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.530 Waiting for block devices as requested 00:04:42.530 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:42.790 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:42.790 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:42.790 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:42.790 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.050 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:43.050 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:43.050 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:43.310 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:43.310 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:43.310 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:04:43.570 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:43.570 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.570 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:43.570 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:43.831 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:43.831 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:43.831 12:10:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.831 12:10:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:43.831 12:10:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:43.831 12:10:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:43.831 12:10:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.831 12:10:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.831 12:10:11 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:43.831 12:10:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.831 12:10:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.831 12:10:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:43.831 12:10:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.831 12:10:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.831 12:10:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.831 12:10:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:43.831 12:10:11 -- common/autotest_common.sh@1543 -- # continue 00:04:43.831 12:10:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:43.831 12:10:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.831 12:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 12:10:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:44.092 12:10:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.092 12:10:11 -- common/autotest_common.sh@10 -- # set +x 00:04:44.092 12:10:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.635 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.895 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.841 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:47.841 12:10:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:47.841 12:10:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.841 12:10:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.841 12:10:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:47.841 12:10:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:47.841 12:10:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:47.841 12:10:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:47.841 12:10:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:47.841 12:10:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:47.841 12:10:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:47.841 12:10:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:47.841 12:10:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:47.841 12:10:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:47.841 12:10:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.841 12:10:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.841 12:10:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:48.101 12:10:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:48.101 12:10:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:48.101 12:10:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:48.101 12:10:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:48.101 12:10:15 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:48.101 12:10:15 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:48.101 12:10:15 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:48.101 12:10:15 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:48.101 12:10:15 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:48.101 12:10:15 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:48.102 12:10:15 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=101998 00:04:48.102 12:10:15 -- common/autotest_common.sh@1585 -- # waitforlisten 101998 00:04:48.102 12:10:15 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.102 12:10:15 -- common/autotest_common.sh@835 -- # '[' -z 101998 ']' 00:04:48.102 12:10:15 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.102 12:10:15 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.102 12:10:15 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.102 12:10:15 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.102 12:10:15 -- common/autotest_common.sh@10 -- # set +x 00:04:48.102 [2024-12-13 12:10:15.650722] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
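get_nvme_bdfs_by_id pairs the gen_nvme.sh/jq enumeration shown above with a sysfs read of each controller's PCI device ID; only matching BDFs survive into the bdfs array. A sketch of that filter (0x0a54 is the ID this run matches):

  want=0x0a54
  for bdf in $(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    # e.g. /sys/bus/pci/devices/0000:5e:00.0/device contains 0x0a54
    [ "$(cat /sys/bus/pci/devices/$bdf/device)" = "$want" ] && printf '%s\n' "$bdf"
  done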
00:04:48.102 [2024-12-13 12:10:15.650770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101998 ] 00:04:48.102 [2024-12-13 12:10:15.727436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.102 [2024-12-13 12:10:15.750448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.362 12:10:15 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.362 12:10:15 -- common/autotest_common.sh@868 -- # return 0 00:04:48.362 12:10:15 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:48.362 12:10:15 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:48.362 12:10:15 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:51.660 nvme0n1 00:04:51.660 12:10:18 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:51.660 [2024-12-13 12:10:19.146834] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:51.660 [2024-12-13 12:10:19.146863] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:51.660 request: 00:04:51.660 { 00:04:51.660 "nvme_ctrlr_name": "nvme0", 00:04:51.660 "password": "test", 00:04:51.660 "method": "bdev_nvme_opal_revert", 00:04:51.660 "req_id": 1 00:04:51.660 } 00:04:51.660 Got JSON-RPC error response 00:04:51.660 response: 00:04:51.660 { 00:04:51.660 "code": -32603, 00:04:51.660 "message": "Internal error" 00:04:51.660 } 00:04:51.660 12:10:19 -- common/autotest_common.sh@1591 -- # true 00:04:51.660 12:10:19 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:51.660 12:10:19 -- common/autotest_common.sh@1595 -- # killprocess 101998 00:04:51.660 12:10:19 -- common/autotest_common.sh@954 -- # '[' -z 101998 ']' 00:04:51.660 12:10:19 -- common/autotest_common.sh@958 -- # kill -0 101998 00:04:51.660 12:10:19 -- common/autotest_common.sh@959 -- # uname 00:04:51.660 12:10:19 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.660 12:10:19 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101998 00:04:51.660 12:10:19 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.660 12:10:19 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.660 12:10:19 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101998' 00:04:51.660 killing process with pid 101998 00:04:51.660 12:10:19 -- common/autotest_common.sh@973 -- # kill 101998 00:04:51.660 12:10:19 -- common/autotest_common.sh@978 -- # wait 101998 00:04:51.660 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
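The bdev_nvme_opal_revert exchange above is plain JSON-RPC over the /var/tmp/spdk.sock UNIX socket; rpc.py is only a convenience wrapper around it. An equivalent raw request, as a sketch assuming socat is available on the box:

  echo '{"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_opal_revert", "params": {"nvme_ctrlr_name": "nvme0", "password": "test"}}' \
    | socat - UNIX-CONNECT:/var/tmp/spdk.sock
  # On this drive the admin SP session fails (error 18), so the server returns
  # the -32603 "Internal error" response logged above and the harness moves on.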
00:04:53.570 12:10:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:53.570 12:10:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:53.570 12:10:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.570 12:10:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.570 12:10:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:53.570 12:10:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.570 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:53.570 12:10:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:53.570 12:10:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.570 12:10:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.570 12:10:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.570 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:04:53.570 ************************************ 00:04:53.570 START TEST env 00:04:53.570 ************************************ 00:04:53.570 12:10:20 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.570 * Looking for test storage... 00:04:53.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.570 12:10:20 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.570 12:10:20 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.570 12:10:20 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.571 12:10:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.571 12:10:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.571 12:10:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.571 12:10:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.571 12:10:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.571 12:10:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.571 12:10:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.571 12:10:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.571 12:10:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.571 12:10:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.571 12:10:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.571 12:10:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:53.571 12:10:21 env -- scripts/common.sh@345 -- # : 1 00:04:53.571 12:10:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.571 12:10:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:04:53.571 12:10:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:53.571 12:10:21 env -- scripts/common.sh@353 -- # local d=1 00:04:53.571 12:10:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.571 12:10:21 env -- scripts/common.sh@355 -- # echo 1 00:04:53.571 12:10:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.571 12:10:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:53.571 12:10:21 env -- scripts/common.sh@353 -- # local d=2 00:04:53.571 12:10:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.571 12:10:21 env -- scripts/common.sh@355 -- # echo 2 00:04:53.571 12:10:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.571 12:10:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.571 12:10:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.571 12:10:21 env -- scripts/common.sh@368 -- # return 0 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.571 --rc genhtml_branch_coverage=1 00:04:53.571 --rc genhtml_function_coverage=1 00:04:53.571 --rc genhtml_legend=1 00:04:53.571 --rc geninfo_all_blocks=1 00:04:53.571 --rc geninfo_unexecuted_blocks=1 00:04:53.571 00:04:53.571 ' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.571 --rc genhtml_branch_coverage=1 00:04:53.571 --rc genhtml_function_coverage=1 00:04:53.571 --rc genhtml_legend=1 00:04:53.571 --rc geninfo_all_blocks=1 00:04:53.571 --rc geninfo_unexecuted_blocks=1 00:04:53.571 00:04:53.571 ' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.571 --rc genhtml_branch_coverage=1 00:04:53.571 --rc genhtml_function_coverage=1 00:04:53.571 --rc genhtml_legend=1 00:04:53.571 --rc geninfo_all_blocks=1 00:04:53.571 --rc geninfo_unexecuted_blocks=1 00:04:53.571 00:04:53.571 ' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.571 --rc genhtml_branch_coverage=1 00:04:53.571 --rc genhtml_function_coverage=1 00:04:53.571 --rc genhtml_legend=1 00:04:53.571 --rc geninfo_all_blocks=1 00:04:53.571 --rc geninfo_unexecuted_blocks=1 00:04:53.571 00:04:53.571 ' 00:04:53.571 12:10:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.571 12:10:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.571 12:10:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.571 ************************************ 00:04:53.571 START TEST env_memory 00:04:53.571 ************************************ 00:04:53.571 12:10:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.571 00:04:53.571 00:04:53.571 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.571 http://cunit.sourceforge.net/ 00:04:53.571 00:04:53.571 00:04:53.571 Suite: memory 00:04:53.571 Test: alloc and free memory map ...[2024-12-13 12:10:21.133682] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.571 passed 00:04:53.571 Test: mem map translation ...[2024-12-13 12:10:21.152432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.571 [2024-12-13 12:10:21.152444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.571 [2024-12-13 12:10:21.152477] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.571 [2024-12-13 12:10:21.152498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.571 passed 00:04:53.571 Test: mem map registration ...[2024-12-13 12:10:21.188531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:53.571 [2024-12-13 12:10:21.188544] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:53.571 passed 00:04:53.571 Test: mem map adjacent registrations ...passed 00:04:53.571 00:04:53.571 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.571 suites 1 1 n/a 0 0 00:04:53.571 tests 4 4 4 0 0 00:04:53.571 asserts 152 152 152 0 n/a 00:04:53.571 00:04:53.571 Elapsed time = 0.123 seconds 00:04:53.571 00:04:53.571 real 0m0.132s 00:04:53.571 user 0m0.126s 00:04:53.571 sys 0m0.005s 00:04:53.571 12:10:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.571 12:10:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:53.571 ************************************ 00:04:53.571 END TEST env_memory 00:04:53.571 ************************************ 00:04:53.832 12:10:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.832 12:10:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.832 12:10:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.832 12:10:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.832 ************************************ 00:04:53.832 START TEST env_vtophys 00:04:53.832 ************************************ 00:04:53.832 12:10:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.832 EAL: lib.eal log level changed from notice to debug 00:04:53.832 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.832 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.832 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.832 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.832 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.832 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.832 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.832 EAL: Detected lcore 7 as core 8 on socket 0 00:04:53.832 EAL: Detected lcore 8 as core 9 on socket 0 00:04:53.832 EAL: Detected lcore 9 as core 10 on socket 0 00:04:53.832 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:53.832 EAL: Detected lcore 11 as core 12 on socket 0 00:04:53.832 EAL: Detected lcore 12 as core 13 on socket 0 00:04:53.832 EAL: Detected lcore 13 as core 16 on socket 0 00:04:53.832 EAL: Detected lcore 14 as core 17 on socket 0 00:04:53.832 EAL: Detected lcore 15 as core 18 on socket 0 00:04:53.832 EAL: Detected lcore 16 as core 19 on socket 0 00:04:53.832 EAL: Detected lcore 17 as core 20 on socket 0 00:04:53.832 EAL: Detected lcore 18 as core 21 on socket 0 00:04:53.832 EAL: Detected lcore 19 as core 25 on socket 0 00:04:53.833 EAL: Detected lcore 20 as core 26 on socket 0 00:04:53.833 EAL: Detected lcore 21 as core 27 on socket 0 00:04:53.833 EAL: Detected lcore 22 as core 28 on socket 0 00:04:53.833 EAL: Detected lcore 23 as core 29 on socket 0 00:04:53.833 EAL: Detected lcore 24 as core 0 on socket 1 00:04:53.833 EAL: Detected lcore 25 as core 1 on socket 1 00:04:53.833 EAL: Detected lcore 26 as core 2 on socket 1 00:04:53.833 EAL: Detected lcore 27 as core 3 on socket 1 00:04:53.833 EAL: Detected lcore 28 as core 4 on socket 1 00:04:53.833 EAL: Detected lcore 29 as core 5 on socket 1 00:04:53.833 EAL: Detected lcore 30 as core 6 on socket 1 00:04:53.833 EAL: Detected lcore 31 as core 8 on socket 1 00:04:53.833 EAL: Detected lcore 32 as core 9 on socket 1 00:04:53.833 EAL: Detected lcore 33 as core 10 on socket 1 00:04:53.833 EAL: Detected lcore 34 as core 11 on socket 1 00:04:53.833 EAL: Detected lcore 35 as core 12 on socket 1 00:04:53.833 EAL: Detected lcore 36 as core 13 on socket 1 00:04:53.833 EAL: Detected lcore 37 as core 16 on socket 1 00:04:53.833 EAL: Detected lcore 38 as core 17 on socket 1 00:04:53.833 EAL: Detected lcore 39 as core 18 on socket 1 00:04:53.833 EAL: Detected lcore 40 as core 19 on socket 1 00:04:53.833 EAL: Detected lcore 41 as core 20 on socket 1 00:04:53.833 EAL: Detected lcore 42 as core 21 on socket 1 00:04:53.833 EAL: Detected lcore 43 as core 25 on socket 1 00:04:53.833 EAL: Detected lcore 44 as core 26 on socket 1 00:04:53.833 EAL: Detected lcore 45 as core 27 on socket 1 00:04:53.833 EAL: Detected lcore 46 as core 28 on socket 1 00:04:53.833 EAL: Detected lcore 47 as core 29 on socket 1 00:04:53.833 EAL: Detected lcore 48 as core 0 on socket 0 00:04:53.833 EAL: Detected lcore 49 as core 1 on socket 0 00:04:53.833 EAL: Detected lcore 50 as core 2 on socket 0 00:04:53.833 EAL: Detected lcore 51 as core 3 on socket 0 00:04:53.833 EAL: Detected lcore 52 as core 4 on socket 0 00:04:53.833 EAL: Detected lcore 53 as core 5 on socket 0 00:04:53.833 EAL: Detected lcore 54 as core 6 on socket 0 00:04:53.833 EAL: Detected lcore 55 as core 8 on socket 0 00:04:53.833 EAL: Detected lcore 56 as core 9 on socket 0 00:04:53.833 EAL: Detected lcore 57 as core 10 on socket 0 00:04:53.833 EAL: Detected lcore 58 as core 11 on socket 0 00:04:53.833 EAL: Detected lcore 59 as core 12 on socket 0 00:04:53.833 EAL: Detected lcore 60 as core 13 on socket 0 00:04:53.833 EAL: Detected lcore 61 as core 16 on socket 0 00:04:53.833 EAL: Detected lcore 62 as core 17 on socket 0 00:04:53.833 EAL: Detected lcore 63 as core 18 on socket 0 00:04:53.833 EAL: Detected lcore 64 as core 19 on socket 0 00:04:53.833 EAL: Detected lcore 65 as core 20 on socket 0 00:04:53.833 EAL: Detected lcore 66 as core 21 on socket 0 00:04:53.833 EAL: Detected lcore 67 as core 25 on socket 0 00:04:53.833 EAL: Detected lcore 68 as core 26 on socket 0 00:04:53.833 EAL: Detected lcore 69 as core 27 on socket 0 00:04:53.833 EAL: Detected lcore 70 as core 28 on socket 0 00:04:53.833 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:53.833 EAL: Detected lcore 72 as core 0 on socket 1 00:04:53.833 EAL: Detected lcore 73 as core 1 on socket 1 00:04:53.833 EAL: Detected lcore 74 as core 2 on socket 1 00:04:53.833 EAL: Detected lcore 75 as core 3 on socket 1 00:04:53.833 EAL: Detected lcore 76 as core 4 on socket 1 00:04:53.833 EAL: Detected lcore 77 as core 5 on socket 1 00:04:53.833 EAL: Detected lcore 78 as core 6 on socket 1 00:04:53.833 EAL: Detected lcore 79 as core 8 on socket 1 00:04:53.833 EAL: Detected lcore 80 as core 9 on socket 1 00:04:53.833 EAL: Detected lcore 81 as core 10 on socket 1 00:04:53.833 EAL: Detected lcore 82 as core 11 on socket 1 00:04:53.833 EAL: Detected lcore 83 as core 12 on socket 1 00:04:53.833 EAL: Detected lcore 84 as core 13 on socket 1 00:04:53.833 EAL: Detected lcore 85 as core 16 on socket 1 00:04:53.833 EAL: Detected lcore 86 as core 17 on socket 1 00:04:53.833 EAL: Detected lcore 87 as core 18 on socket 1 00:04:53.833 EAL: Detected lcore 88 as core 19 on socket 1 00:04:53.833 EAL: Detected lcore 89 as core 20 on socket 1 00:04:53.833 EAL: Detected lcore 90 as core 21 on socket 1 00:04:53.833 EAL: Detected lcore 91 as core 25 on socket 1 00:04:53.833 EAL: Detected lcore 92 as core 26 on socket 1 00:04:53.833 EAL: Detected lcore 93 as core 27 on socket 1 00:04:53.833 EAL: Detected lcore 94 as core 28 on socket 1 00:04:53.833 EAL: Detected lcore 95 as core 29 on socket 1 00:04:53.833 EAL: Maximum logical cores by configuration: 128 00:04:53.833 EAL: Detected CPU lcores: 96 00:04:53.833 EAL: Detected NUMA nodes: 2 00:04:53.833 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:53.833 EAL: Detected shared linkage of DPDK 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:53.833 EAL: Registered [vdev] bus. 00:04:53.833 EAL: bus.vdev log level changed from disabled to notice 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:53.833 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:53.833 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:53.833 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:53.833 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: Bus pci wants IOVA as 'DC' 00:04:53.833 EAL: Bus vdev wants IOVA as 'DC' 00:04:53.833 EAL: Buses did not request a specific IOVA mode. 00:04:53.833 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.833 EAL: Selected IOVA mode 'VA' 00:04:53.833 EAL: Probing VFIO support... 
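The lcore inventory above is derived from the kernel's CPU topology; a short sketch that reproduces the same "lcore N as core M on socket S" lines straight from sysfs:

  for c in /sys/devices/system/cpu/cpu[0-9]*; do
    lcore=${c##*cpu}                                   # logical CPU number
    core=$(cat "$c/topology/core_id")                  # physical core within the package
    sock=$(cat "$c/topology/physical_package_id")      # package, i.e. the socket here
    echo "lcore $lcore as core $core on socket $sock"
  done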
00:04:53.833 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.833 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.833 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.833 EAL: VFIO support initialized 00:04:53.833 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.833 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.833 EAL: Setting up physically contiguous memory... 00:04:53.833 EAL: Setting maximum number of open files to 524288 00:04:53.833 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.833 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.833 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.833 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.833 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.833 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.833 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:53.833 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.833 EAL: Hugepages will be freed exactly as allocated. 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: TSC frequency is ~2100000 KHz 00:04:53.833 EAL: Main lcore 0 is ready (tid=7f6d5c574a00;cpuset=[0]) 00:04:53.833 EAL: Trying to obtain current memory policy. 00:04:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.833 EAL: Restoring previous memory policy: 0 00:04:53.833 EAL: request: mp_malloc_sync 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.833 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:53.833 EAL: probe driver: 8086:37d2 net_i40e 00:04:53.833 EAL: Not managed by a supported kernel driver, skipped 00:04:53.833 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:53.833 EAL: probe driver: 8086:37d2 net_i40e 00:04:53.833 EAL: Not managed by a supported kernel driver, skipped 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: No shared files mode enabled, IPC is disabled 00:04:53.833 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.834 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.834 00:04:53.834 00:04:53.834 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.834 http://cunit.sourceforge.net/ 00:04:53.834 00:04:53.834 00:04:53.834 Suite: components_suite 00:04:53.834 Test: vtophys_malloc_test ...passed 00:04:53.834 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.834 EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.834 EAL: Trying to obtain current memory policy. 
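Each memseg list reserved above holds n_segs:8192 hugepages of 2MB, which is exactly the 0x400000000-byte (16GiB) virtual areas the EAL asked for; a one-line arithmetic check:

  printf '0x%x\n' $((8192 * 2 * 1024 * 1024))   # 8192 segs x 2MB -> 0x400000000 (16GiB)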
00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.834 EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.834 EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.834 EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.834 EAL: Trying to obtain current memory policy. 00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.834 EAL: Restoring previous memory policy: 4 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.834 EAL: request: mp_malloc_sync 00:04:53.834 EAL: No shared files mode enabled, IPC is disabled 00:04:53.834 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.834 EAL: Trying to obtain current memory policy. 
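The expand/shrink rounds in this malloc test follow a doubling schedule: every allocation is 2MB plus a power of two, which reproduces the 4MB, 6MB, 10MB, ... 130MB sizes seen so far and the 258MB, 514MB, 1026MB rounds that follow. A sketch of the series:

  for k in $(seq 1 10); do printf '%dMB ' $((2 + (1 << k))); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB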
00:04:53.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.094 EAL: Restoring previous memory policy: 4 00:04:54.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.094 EAL: request: mp_malloc_sync 00:04:54.094 EAL: No shared files mode enabled, IPC is disabled 00:04:54.094 EAL: Heap on socket 0 was expanded by 258MB 00:04:54.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.094 EAL: request: mp_malloc_sync 00:04:54.094 EAL: No shared files mode enabled, IPC is disabled 00:04:54.094 EAL: Heap on socket 0 was shrunk by 258MB 00:04:54.094 EAL: Trying to obtain current memory policy. 00:04:54.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.094 EAL: Restoring previous memory policy: 4 00:04:54.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.094 EAL: request: mp_malloc_sync 00:04:54.094 EAL: No shared files mode enabled, IPC is disabled 00:04:54.094 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.354 EAL: request: mp_malloc_sync 00:04:54.354 EAL: No shared files mode enabled, IPC is disabled 00:04:54.354 EAL: Heap on socket 0 was shrunk by 514MB 00:04:54.354 EAL: Trying to obtain current memory policy. 00:04:54.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.614 EAL: Restoring previous memory policy: 4 00:04:54.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.614 EAL: request: mp_malloc_sync 00:04:54.614 EAL: No shared files mode enabled, IPC is disabled 00:04:54.614 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.875 EAL: request: mp_malloc_sync 00:04:54.875 EAL: No shared files mode enabled, IPC is disabled 00:04:54.875 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.875 passed 00:04:54.875 00:04:54.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.875 suites 1 1 n/a 0 0 00:04:54.875 tests 2 2 2 0 0 00:04:54.875 asserts 497 497 497 0 n/a 00:04:54.875 00:04:54.875 Elapsed time = 0.970 seconds 00:04:54.875 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.875 EAL: request: mp_malloc_sync 00:04:54.875 EAL: No shared files mode enabled, IPC is disabled 00:04:54.875 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.875 EAL: No shared files mode enabled, IPC is disabled 00:04:54.875 EAL: No shared files mode enabled, IPC is disabled 00:04:54.875 EAL: No shared files mode enabled, IPC is disabled 00:04:54.875 00:04:54.875 real 0m1.100s 00:04:54.875 user 0m0.650s 00:04:54.875 sys 0m0.423s 00:04:54.875 12:10:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.875 12:10:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 ************************************ 00:04:54.875 END TEST env_vtophys 00:04:54.875 ************************************ 00:04:54.875 12:10:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.875 12:10:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.875 12:10:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.875 12:10:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 ************************************ 00:04:54.875 START TEST env_pci 00:04:54.875 ************************************ 00:04:54.875 12:10:22 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.875 00:04:54.875 00:04:54.875 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:54.875 http://cunit.sourceforge.net/ 00:04:54.875 00:04:54.875 00:04:54.875 Suite: pci 00:04:54.875 Test: pci_hook ...[2024-12-13 12:10:22.487585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103274 has claimed it 00:04:54.875 EAL: Cannot find device (10000:00:01.0) 00:04:54.875 EAL: Failed to attach device on primary process 00:04:54.875 passed 00:04:54.875 00:04:54.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.875 suites 1 1 n/a 0 0 00:04:54.875 tests 1 1 1 0 0 00:04:54.875 asserts 25 25 25 0 n/a 00:04:54.875 00:04:54.875 Elapsed time = 0.025 seconds 00:04:54.875 00:04:54.875 real 0m0.043s 00:04:54.875 user 0m0.014s 00:04:54.875 sys 0m0.029s 00:04:54.875 12:10:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.875 12:10:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.875 ************************************ 00:04:54.875 END TEST env_pci 00:04:54.875 ************************************ 00:04:54.875 12:10:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.875 12:10:22 env -- env/env.sh@15 -- # uname 00:04:54.875 12:10:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.875 12:10:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.875 12:10:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.875 12:10:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:54.875 12:10:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.875 12:10:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.135 ************************************ 00:04:55.135 START TEST env_dpdk_post_init 00:04:55.135 ************************************ 00:04:55.135 12:10:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.135 EAL: Detected CPU lcores: 96 00:04:55.135 EAL: Detected NUMA nodes: 2 00:04:55.135 EAL: Detected shared linkage of DPDK 00:04:55.135 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.135 EAL: Selected IOVA mode 'VA' 00:04:55.135 EAL: VFIO support initialized 00:04:55.135 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.135 EAL: Using IOMMU type 1 (Type 1) 00:04:55.135 EAL: Ignore mapping IO port bar(1) 00:04:55.135 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:55.135 EAL: Ignore mapping IO port bar(1) 00:04:55.135 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:55.135 EAL: Ignore mapping IO port bar(1) 00:04:55.135 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:55.135 EAL: Ignore mapping IO port bar(1) 00:04:55.135 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:55.135 EAL: Ignore mapping IO port bar(1) 00:04:55.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:55.136 EAL: Ignore mapping IO port bar(1) 00:04:55.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:55.136 EAL: Ignore mapping IO port bar(1) 00:04:55.136 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:04:55.136 EAL: Ignore mapping IO port bar(1) 00:04:55.136 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:56.075 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:56.075 EAL: Ignore mapping IO port bar(1) 00:04:56.075 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:59.368 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:59.368 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:59.368 Starting DPDK initialization... 00:04:59.368 Starting SPDK post initialization... 00:04:59.368 SPDK NVMe probe 00:04:59.368 Attaching to 0000:5e:00.0 00:04:59.368 Attached to 0000:5e:00.0 00:04:59.368 Cleaning up... 00:04:59.368 00:04:59.368 real 0m4.339s 00:04:59.368 user 0m3.255s 00:04:59.368 sys 0m0.153s 00:04:59.368 12:10:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.368 12:10:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 ************************************ 00:04:59.368 END TEST env_dpdk_post_init 00:04:59.368 ************************************ 00:04:59.368 12:10:26 env -- env/env.sh@26 -- # uname 00:04:59.368 12:10:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.368 12:10:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.368 12:10:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.368 12:10:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.368 12:10:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 ************************************ 00:04:59.368 START TEST env_mem_callbacks 00:04:59.368 ************************************ 00:04:59.368 12:10:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.368 EAL: Detected CPU lcores: 96 00:04:59.368 EAL: Detected NUMA nodes: 2 00:04:59.368 EAL: Detected shared linkage of DPDK 00:04:59.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.368 EAL: Selected IOVA mode 'VA' 00:04:59.368 EAL: VFIO support initialized 00:04:59.368 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.368 00:04:59.368 00:04:59.368 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.368 http://cunit.sourceforge.net/ 00:04:59.368 00:04:59.368 00:04:59.368 Suite: memory 
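The heap expand/shrink messages earlier and the 'memory' suite opening above exercise the same mem-event path: the DPDK heap grows or shrinks, and the callback SPDK registered under the name 'spdk' (the 'spdk:(nil)' in the EAL lines) maps or unmaps the affected region. A sketch of re-running this suite by hand, assuming an SPDK checkout and the usual hugepage setup step (the HUGEMEM value is an arbitrary choice):

    # reserve hugepages for the DPDK heap, then run the standalone CUnit binary
    sudo HUGEMEM=2048 scripts/setup.sh
    test/env/mem_callbacks/mem_callbacks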
00:04:59.368 Test: test ... 00:04:59.368 register 0x200000200000 2097152 00:04:59.368 malloc 3145728 00:04:59.368 register 0x200000400000 4194304 00:04:59.368 buf 0x200000500000 len 3145728 PASSED 00:04:59.368 malloc 64 00:04:59.368 buf 0x2000004fff40 len 64 PASSED 00:04:59.368 malloc 4194304 00:04:59.368 register 0x200000800000 6291456 00:04:59.368 buf 0x200000a00000 len 4194304 PASSED 00:04:59.368 free 0x200000500000 3145728 00:04:59.368 free 0x2000004fff40 64 00:04:59.368 unregister 0x200000400000 4194304 PASSED 00:04:59.368 free 0x200000a00000 4194304 00:04:59.368 unregister 0x200000800000 6291456 PASSED 00:04:59.368 malloc 8388608 00:04:59.368 register 0x200000400000 10485760 00:04:59.368 buf 0x200000600000 len 8388608 PASSED 00:04:59.368 free 0x200000600000 8388608 00:04:59.368 unregister 0x200000400000 10485760 PASSED 00:04:59.368 passed 00:04:59.368 00:04:59.368 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.368 suites 1 1 n/a 0 0 00:04:59.368 tests 1 1 1 0 0 00:04:59.368 asserts 15 15 15 0 n/a 00:04:59.368 00:04:59.368 Elapsed time = 0.008 seconds 00:04:59.368 00:04:59.368 real 0m0.056s 00:04:59.368 user 0m0.017s 00:04:59.368 sys 0m0.039s 00:04:59.368 12:10:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.368 12:10:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 ************************************ 00:04:59.368 END TEST env_mem_callbacks 00:04:59.368 ************************************ 00:04:59.628 00:04:59.628 real 0m6.212s 00:04:59.628 user 0m4.318s 00:04:59.628 sys 0m0.972s 00:04:59.628 12:10:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.628 12:10:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.628 ************************************ 00:04:59.628 END TEST env 00:04:59.628 ************************************ 00:04:59.628 12:10:27 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.628 12:10:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.628 12:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.628 12:10:27 -- common/autotest_common.sh@10 -- # set +x 00:04:59.628 ************************************ 00:04:59.628 START TEST rpc 00:04:59.628 ************************************ 00:04:59.628 12:10:27 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.628 * Looking for test storage... 
00:04:59.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.628 12:10:27 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.628 12:10:27 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.628 12:10:27 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.628 12:10:27 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.628 12:10:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.628 12:10:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.628 12:10:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.628 12:10:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.628 12:10:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.628 12:10:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.628 12:10:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.628 12:10:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.628 12:10:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.889 12:10:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.889 12:10:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.889 12:10:27 rpc -- scripts/common.sh@345 -- # : 1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.889 12:10:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.889 12:10:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.889 12:10:27 rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.889 12:10:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.889 12:10:27 rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.889 12:10:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.889 12:10:27 rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.889 12:10:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.889 12:10:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.889 12:10:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.889 12:10:27 rpc -- scripts/common.sh@368 -- # return 0 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.889 --rc genhtml_branch_coverage=1 00:04:59.889 --rc genhtml_function_coverage=1 00:04:59.889 --rc genhtml_legend=1 00:04:59.889 --rc geninfo_all_blocks=1 00:04:59.889 --rc geninfo_unexecuted_blocks=1 00:04:59.889 00:04:59.889 ' 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.889 --rc genhtml_branch_coverage=1 00:04:59.889 --rc genhtml_function_coverage=1 00:04:59.889 --rc genhtml_legend=1 00:04:59.889 --rc geninfo_all_blocks=1 00:04:59.889 --rc geninfo_unexecuted_blocks=1 00:04:59.889 00:04:59.889 ' 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.889 --rc genhtml_branch_coverage=1 00:04:59.889 --rc genhtml_function_coverage=1 
00:04:59.889 --rc genhtml_legend=1 00:04:59.889 --rc geninfo_all_blocks=1 00:04:59.889 --rc geninfo_unexecuted_blocks=1 00:04:59.889 00:04:59.889 ' 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.889 --rc genhtml_branch_coverage=1 00:04:59.889 --rc genhtml_function_coverage=1 00:04:59.889 --rc genhtml_legend=1 00:04:59.889 --rc geninfo_all_blocks=1 00:04:59.889 --rc geninfo_unexecuted_blocks=1 00:04:59.889 00:04:59.889 ' 00:04:59.889 12:10:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104089 00:04:59.889 12:10:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:59.889 12:10:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.889 12:10:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104089 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@835 -- # '[' -z 104089 ']' 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.889 12:10:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.889 [2024-12-13 12:10:27.393378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:59.889 [2024-12-13 12:10:27.393420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104089 ] 00:04:59.889 [2024-12-13 12:10:27.463218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.889 [2024-12-13 12:10:27.485644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.889 [2024-12-13 12:10:27.485680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104089' to capture a snapshot of events at runtime. 00:04:59.889 [2024-12-13 12:10:27.485687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.889 [2024-12-13 12:10:27.485693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.889 [2024-12-13 12:10:27.485698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104089 for offline analysis/debug. 
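The app_setup_trace notices above spell out the capture recipe for this run; putting them together (pid 104089 is this run's spdk_tgt, and only the 'bdev' tpoint group was enabled):

    # live snapshot of trace events from the running target
    build/bin/spdk_trace -s spdk_tgt -p 104089
    # or, as the notice suggests, keep the shm file for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid104089 /tmp/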
00:04:59.889 [2024-12-13 12:10:27.486191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.149 12:10:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.149 12:10:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.149 12:10:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.149 12:10:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.149 12:10:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.149 12:10:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.149 12:10:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.149 12:10:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.149 12:10:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 ************************************ 00:05:00.149 START TEST rpc_integrity 00:05:00.149 ************************************ 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.149 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.149 { 00:05:00.149 "name": "Malloc0", 00:05:00.149 "aliases": [ 00:05:00.149 "ef989f3d-6d50-4f76-bb12-04ebafd69bec" 00:05:00.149 ], 00:05:00.149 "product_name": "Malloc disk", 00:05:00.149 "block_size": 512, 00:05:00.149 "num_blocks": 16384, 00:05:00.149 "uuid": "ef989f3d-6d50-4f76-bb12-04ebafd69bec", 00:05:00.149 "assigned_rate_limits": { 00:05:00.149 "rw_ios_per_sec": 0, 00:05:00.149 "rw_mbytes_per_sec": 0, 00:05:00.149 "r_mbytes_per_sec": 0, 00:05:00.149 "w_mbytes_per_sec": 0 00:05:00.149 }, 
00:05:00.149 "claimed": false, 00:05:00.149 "zoned": false, 00:05:00.149 "supported_io_types": { 00:05:00.149 "read": true, 00:05:00.149 "write": true, 00:05:00.149 "unmap": true, 00:05:00.149 "flush": true, 00:05:00.149 "reset": true, 00:05:00.149 "nvme_admin": false, 00:05:00.149 "nvme_io": false, 00:05:00.149 "nvme_io_md": false, 00:05:00.149 "write_zeroes": true, 00:05:00.149 "zcopy": true, 00:05:00.149 "get_zone_info": false, 00:05:00.149 "zone_management": false, 00:05:00.149 "zone_append": false, 00:05:00.149 "compare": false, 00:05:00.149 "compare_and_write": false, 00:05:00.149 "abort": true, 00:05:00.149 "seek_hole": false, 00:05:00.149 "seek_data": false, 00:05:00.149 "copy": true, 00:05:00.149 "nvme_iov_md": false 00:05:00.149 }, 00:05:00.149 "memory_domains": [ 00:05:00.149 { 00:05:00.149 "dma_device_id": "system", 00:05:00.149 "dma_device_type": 1 00:05:00.149 }, 00:05:00.149 { 00:05:00.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.149 "dma_device_type": 2 00:05:00.149 } 00:05:00.149 ], 00:05:00.149 "driver_specific": {} 00:05:00.149 } 00:05:00.149 ]' 00:05:00.149 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.150 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.409 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.409 [2024-12-13 12:10:27.855934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.409 [2024-12-13 12:10:27.855962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.409 [2024-12-13 12:10:27.855975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x93ea00 00:05:00.409 [2024-12-13 12:10:27.855981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.409 [2024-12-13 12:10:27.857027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.409 [2024-12-13 12:10:27.857048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.409 Passthru0 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.409 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.409 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.409 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.409 { 00:05:00.409 "name": "Malloc0", 00:05:00.409 "aliases": [ 00:05:00.409 "ef989f3d-6d50-4f76-bb12-04ebafd69bec" 00:05:00.409 ], 00:05:00.409 "product_name": "Malloc disk", 00:05:00.409 "block_size": 512, 00:05:00.409 "num_blocks": 16384, 00:05:00.409 "uuid": "ef989f3d-6d50-4f76-bb12-04ebafd69bec", 00:05:00.409 "assigned_rate_limits": { 00:05:00.409 "rw_ios_per_sec": 0, 00:05:00.409 "rw_mbytes_per_sec": 0, 00:05:00.409 "r_mbytes_per_sec": 0, 00:05:00.409 "w_mbytes_per_sec": 0 00:05:00.409 }, 00:05:00.409 "claimed": true, 00:05:00.409 "claim_type": "exclusive_write", 00:05:00.409 "zoned": false, 00:05:00.409 "supported_io_types": { 00:05:00.409 "read": true, 00:05:00.409 "write": true, 00:05:00.409 "unmap": true, 00:05:00.409 "flush": 
true, 00:05:00.409 "reset": true, 00:05:00.409 "nvme_admin": false, 00:05:00.409 "nvme_io": false, 00:05:00.409 "nvme_io_md": false, 00:05:00.409 "write_zeroes": true, 00:05:00.409 "zcopy": true, 00:05:00.409 "get_zone_info": false, 00:05:00.409 "zone_management": false, 00:05:00.409 "zone_append": false, 00:05:00.409 "compare": false, 00:05:00.409 "compare_and_write": false, 00:05:00.409 "abort": true, 00:05:00.409 "seek_hole": false, 00:05:00.409 "seek_data": false, 00:05:00.409 "copy": true, 00:05:00.409 "nvme_iov_md": false 00:05:00.409 }, 00:05:00.409 "memory_domains": [ 00:05:00.409 { 00:05:00.409 "dma_device_id": "system", 00:05:00.409 "dma_device_type": 1 00:05:00.409 }, 00:05:00.409 { 00:05:00.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.409 "dma_device_type": 2 00:05:00.409 } 00:05:00.409 ], 00:05:00.409 "driver_specific": {} 00:05:00.409 }, 00:05:00.409 { 00:05:00.409 "name": "Passthru0", 00:05:00.409 "aliases": [ 00:05:00.410 "2e96eb6a-ea2d-51c9-9f15-0b8e3f014e33" 00:05:00.410 ], 00:05:00.410 "product_name": "passthru", 00:05:00.410 "block_size": 512, 00:05:00.410 "num_blocks": 16384, 00:05:00.410 "uuid": "2e96eb6a-ea2d-51c9-9f15-0b8e3f014e33", 00:05:00.410 "assigned_rate_limits": { 00:05:00.410 "rw_ios_per_sec": 0, 00:05:00.410 "rw_mbytes_per_sec": 0, 00:05:00.410 "r_mbytes_per_sec": 0, 00:05:00.410 "w_mbytes_per_sec": 0 00:05:00.410 }, 00:05:00.410 "claimed": false, 00:05:00.410 "zoned": false, 00:05:00.410 "supported_io_types": { 00:05:00.410 "read": true, 00:05:00.410 "write": true, 00:05:00.410 "unmap": true, 00:05:00.410 "flush": true, 00:05:00.410 "reset": true, 00:05:00.410 "nvme_admin": false, 00:05:00.410 "nvme_io": false, 00:05:00.410 "nvme_io_md": false, 00:05:00.410 "write_zeroes": true, 00:05:00.410 "zcopy": true, 00:05:00.410 "get_zone_info": false, 00:05:00.410 "zone_management": false, 00:05:00.410 "zone_append": false, 00:05:00.410 "compare": false, 00:05:00.410 "compare_and_write": false, 00:05:00.410 "abort": true, 00:05:00.410 "seek_hole": false, 00:05:00.410 "seek_data": false, 00:05:00.410 "copy": true, 00:05:00.410 "nvme_iov_md": false 00:05:00.410 }, 00:05:00.410 "memory_domains": [ 00:05:00.410 { 00:05:00.410 "dma_device_id": "system", 00:05:00.410 "dma_device_type": 1 00:05:00.410 }, 00:05:00.410 { 00:05:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.410 "dma_device_type": 2 00:05:00.410 } 00:05:00.410 ], 00:05:00.410 "driver_specific": { 00:05:00.410 "passthru": { 00:05:00.410 "name": "Passthru0", 00:05:00.410 "base_bdev_name": "Malloc0" 00:05:00.410 } 00:05:00.410 } 00:05:00.410 } 00:05:00.410 ]' 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 12:10:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.410 12:10:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.410 12:10:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.410 00:05:00.410 real 0m0.281s 00:05:00.410 user 0m0.180s 00:05:00.410 sys 0m0.033s 00:05:00.410 12:10:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.410 12:10:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 ************************************ 00:05:00.410 END TEST rpc_integrity 00:05:00.410 ************************************ 00:05:00.410 12:10:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.410 12:10:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.410 12:10:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.410 12:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 ************************************ 00:05:00.410 START TEST rpc_plugins 00:05:00.410 ************************************ 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:00.410 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.410 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.410 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.410 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.410 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.410 { 00:05:00.410 "name": "Malloc1", 00:05:00.410 "aliases": [ 00:05:00.410 "4c57a5a5-24f8-486c-943b-566084b48e17" 00:05:00.410 ], 00:05:00.410 "product_name": "Malloc disk", 00:05:00.410 "block_size": 4096, 00:05:00.410 "num_blocks": 256, 00:05:00.410 "uuid": "4c57a5a5-24f8-486c-943b-566084b48e17", 00:05:00.410 "assigned_rate_limits": { 00:05:00.410 "rw_ios_per_sec": 0, 00:05:00.410 "rw_mbytes_per_sec": 0, 00:05:00.410 "r_mbytes_per_sec": 0, 00:05:00.410 "w_mbytes_per_sec": 0 00:05:00.410 }, 00:05:00.410 "claimed": false, 00:05:00.410 "zoned": false, 00:05:00.410 "supported_io_types": { 00:05:00.410 "read": true, 00:05:00.410 "write": true, 00:05:00.410 "unmap": true, 00:05:00.410 "flush": true, 00:05:00.410 "reset": true, 00:05:00.410 "nvme_admin": false, 00:05:00.410 "nvme_io": false, 00:05:00.410 "nvme_io_md": false, 00:05:00.410 "write_zeroes": true, 00:05:00.410 "zcopy": true, 00:05:00.410 "get_zone_info": false, 00:05:00.410 "zone_management": false, 00:05:00.410 "zone_append": false, 00:05:00.410 "compare": false, 00:05:00.410 "compare_and_write": false, 00:05:00.410 "abort": true, 00:05:00.410 "seek_hole": false, 00:05:00.410 "seek_data": false, 00:05:00.410 "copy": true, 00:05:00.410 "nvme_iov_md": false 
00:05:00.410 }, 00:05:00.410 "memory_domains": [ 00:05:00.410 { 00:05:00.410 "dma_device_id": "system", 00:05:00.410 "dma_device_type": 1 00:05:00.410 }, 00:05:00.410 { 00:05:00.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.410 "dma_device_type": 2 00:05:00.410 } 00:05:00.410 ], 00:05:00.410 "driver_specific": {} 00:05:00.410 } 00:05:00.410 ]' 00:05:00.410 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.669 12:10:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.669 00:05:00.669 real 0m0.129s 00:05:00.669 user 0m0.072s 00:05:00.669 sys 0m0.023s 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.669 12:10:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.669 ************************************ 00:05:00.669 END TEST rpc_plugins 00:05:00.669 ************************************ 00:05:00.669 12:10:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.669 12:10:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.669 12:10:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.669 12:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.669 ************************************ 00:05:00.669 START TEST rpc_trace_cmd_test 00:05:00.669 ************************************ 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.669 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.669 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104089", 00:05:00.669 "tpoint_group_mask": "0x8", 00:05:00.669 "iscsi_conn": { 00:05:00.669 "mask": "0x2", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "scsi": { 00:05:00.669 "mask": "0x4", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "bdev": { 00:05:00.669 "mask": "0x8", 00:05:00.669 "tpoint_mask": "0xffffffffffffffff" 00:05:00.669 }, 00:05:00.669 "nvmf_rdma": { 00:05:00.669 "mask": "0x10", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "nvmf_tcp": { 00:05:00.669 "mask": "0x20", 00:05:00.669 
"tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "ftl": { 00:05:00.669 "mask": "0x40", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "blobfs": { 00:05:00.669 "mask": "0x80", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "dsa": { 00:05:00.669 "mask": "0x200", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "thread": { 00:05:00.669 "mask": "0x400", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "nvme_pcie": { 00:05:00.669 "mask": "0x800", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "iaa": { 00:05:00.669 "mask": "0x1000", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "nvme_tcp": { 00:05:00.669 "mask": "0x2000", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "bdev_nvme": { 00:05:00.669 "mask": "0x4000", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "sock": { 00:05:00.669 "mask": "0x8000", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.669 "blob": { 00:05:00.669 "mask": "0x10000", 00:05:00.669 "tpoint_mask": "0x0" 00:05:00.669 }, 00:05:00.670 "bdev_raid": { 00:05:00.670 "mask": "0x20000", 00:05:00.670 "tpoint_mask": "0x0" 00:05:00.670 }, 00:05:00.670 "scheduler": { 00:05:00.670 "mask": "0x40000", 00:05:00.670 "tpoint_mask": "0x0" 00:05:00.670 } 00:05:00.670 }' 00:05:00.670 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.670 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:00.670 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.670 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.929 00:05:00.929 real 0m0.222s 00:05:00.929 user 0m0.177s 00:05:00.929 sys 0m0.036s 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.929 12:10:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 ************************************ 00:05:00.929 END TEST rpc_trace_cmd_test 00:05:00.929 ************************************ 00:05:00.929 12:10:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.929 12:10:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.929 12:10:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.929 12:10:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.929 12:10:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.929 12:10:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 ************************************ 00:05:00.929 START TEST rpc_daemon_integrity 00:05:00.929 ************************************ 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.929 12:10:28 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.929 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.189 { 00:05:01.189 "name": "Malloc2", 00:05:01.189 "aliases": [ 00:05:01.189 "8426389e-89a2-455a-820d-fdbeea07e8d1" 00:05:01.189 ], 00:05:01.189 "product_name": "Malloc disk", 00:05:01.189 "block_size": 512, 00:05:01.189 "num_blocks": 16384, 00:05:01.189 "uuid": "8426389e-89a2-455a-820d-fdbeea07e8d1", 00:05:01.189 "assigned_rate_limits": { 00:05:01.189 "rw_ios_per_sec": 0, 00:05:01.189 "rw_mbytes_per_sec": 0, 00:05:01.189 "r_mbytes_per_sec": 0, 00:05:01.189 "w_mbytes_per_sec": 0 00:05:01.189 }, 00:05:01.189 "claimed": false, 00:05:01.189 "zoned": false, 00:05:01.189 "supported_io_types": { 00:05:01.189 "read": true, 00:05:01.189 "write": true, 00:05:01.189 "unmap": true, 00:05:01.189 "flush": true, 00:05:01.189 "reset": true, 00:05:01.189 "nvme_admin": false, 00:05:01.189 "nvme_io": false, 00:05:01.189 "nvme_io_md": false, 00:05:01.189 "write_zeroes": true, 00:05:01.189 "zcopy": true, 00:05:01.189 "get_zone_info": false, 00:05:01.189 "zone_management": false, 00:05:01.189 "zone_append": false, 00:05:01.189 "compare": false, 00:05:01.189 "compare_and_write": false, 00:05:01.189 "abort": true, 00:05:01.189 "seek_hole": false, 00:05:01.189 "seek_data": false, 00:05:01.189 "copy": true, 00:05:01.189 "nvme_iov_md": false 00:05:01.189 }, 00:05:01.189 "memory_domains": [ 00:05:01.189 { 00:05:01.189 "dma_device_id": "system", 00:05:01.189 "dma_device_type": 1 00:05:01.189 }, 00:05:01.189 { 00:05:01.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.189 "dma_device_type": 2 00:05:01.189 } 00:05:01.189 ], 00:05:01.189 "driver_specific": {} 00:05:01.189 } 00:05:01.189 ]' 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 [2024-12-13 12:10:28.682158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.189 
[2024-12-13 12:10:28.682183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.189 [2024-12-13 12:10:28.682195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7fcac0 00:05:01.189 [2024-12-13 12:10:28.682201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.189 [2024-12-13 12:10:28.683129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.189 [2024-12-13 12:10:28.683148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.189 Passthru0 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.189 { 00:05:01.189 "name": "Malloc2", 00:05:01.189 "aliases": [ 00:05:01.189 "8426389e-89a2-455a-820d-fdbeea07e8d1" 00:05:01.189 ], 00:05:01.189 "product_name": "Malloc disk", 00:05:01.189 "block_size": 512, 00:05:01.189 "num_blocks": 16384, 00:05:01.189 "uuid": "8426389e-89a2-455a-820d-fdbeea07e8d1", 00:05:01.189 "assigned_rate_limits": { 00:05:01.189 "rw_ios_per_sec": 0, 00:05:01.189 "rw_mbytes_per_sec": 0, 00:05:01.189 "r_mbytes_per_sec": 0, 00:05:01.189 "w_mbytes_per_sec": 0 00:05:01.189 }, 00:05:01.189 "claimed": true, 00:05:01.189 "claim_type": "exclusive_write", 00:05:01.189 "zoned": false, 00:05:01.189 "supported_io_types": { 00:05:01.189 "read": true, 00:05:01.189 "write": true, 00:05:01.189 "unmap": true, 00:05:01.189 "flush": true, 00:05:01.189 "reset": true, 00:05:01.189 "nvme_admin": false, 00:05:01.189 "nvme_io": false, 00:05:01.189 "nvme_io_md": false, 00:05:01.189 "write_zeroes": true, 00:05:01.189 "zcopy": true, 00:05:01.189 "get_zone_info": false, 00:05:01.189 "zone_management": false, 00:05:01.189 "zone_append": false, 00:05:01.189 "compare": false, 00:05:01.189 "compare_and_write": false, 00:05:01.189 "abort": true, 00:05:01.189 "seek_hole": false, 00:05:01.189 "seek_data": false, 00:05:01.189 "copy": true, 00:05:01.189 "nvme_iov_md": false 00:05:01.189 }, 00:05:01.189 "memory_domains": [ 00:05:01.189 { 00:05:01.189 "dma_device_id": "system", 00:05:01.189 "dma_device_type": 1 00:05:01.189 }, 00:05:01.189 { 00:05:01.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.189 "dma_device_type": 2 00:05:01.189 } 00:05:01.189 ], 00:05:01.189 "driver_specific": {} 00:05:01.189 }, 00:05:01.189 { 00:05:01.189 "name": "Passthru0", 00:05:01.189 "aliases": [ 00:05:01.189 "62d67a21-50cf-5421-b2ab-8fe080f4c4ba" 00:05:01.189 ], 00:05:01.189 "product_name": "passthru", 00:05:01.189 "block_size": 512, 00:05:01.189 "num_blocks": 16384, 00:05:01.189 "uuid": "62d67a21-50cf-5421-b2ab-8fe080f4c4ba", 00:05:01.189 "assigned_rate_limits": { 00:05:01.189 "rw_ios_per_sec": 0, 00:05:01.189 "rw_mbytes_per_sec": 0, 00:05:01.189 "r_mbytes_per_sec": 0, 00:05:01.189 "w_mbytes_per_sec": 0 00:05:01.189 }, 00:05:01.189 "claimed": false, 00:05:01.189 "zoned": false, 00:05:01.189 "supported_io_types": { 00:05:01.189 "read": true, 00:05:01.189 "write": true, 00:05:01.189 "unmap": true, 00:05:01.189 "flush": true, 00:05:01.189 "reset": true, 
00:05:01.189 "nvme_admin": false, 00:05:01.189 "nvme_io": false, 00:05:01.189 "nvme_io_md": false, 00:05:01.189 "write_zeroes": true, 00:05:01.189 "zcopy": true, 00:05:01.189 "get_zone_info": false, 00:05:01.189 "zone_management": false, 00:05:01.189 "zone_append": false, 00:05:01.189 "compare": false, 00:05:01.189 "compare_and_write": false, 00:05:01.189 "abort": true, 00:05:01.189 "seek_hole": false, 00:05:01.189 "seek_data": false, 00:05:01.189 "copy": true, 00:05:01.189 "nvme_iov_md": false 00:05:01.189 }, 00:05:01.189 "memory_domains": [ 00:05:01.189 { 00:05:01.189 "dma_device_id": "system", 00:05:01.189 "dma_device_type": 1 00:05:01.189 }, 00:05:01.189 { 00:05:01.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.189 "dma_device_type": 2 00:05:01.189 } 00:05:01.189 ], 00:05:01.189 "driver_specific": { 00:05:01.189 "passthru": { 00:05:01.189 "name": "Passthru0", 00:05:01.189 "base_bdev_name": "Malloc2" 00:05:01.189 } 00:05:01.189 } 00:05:01.189 } 00:05:01.189 ]' 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.190 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.190 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.190 12:10:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.190 00:05:01.190 real 0m0.276s 00:05:01.190 user 0m0.180s 00:05:01.190 sys 0m0.029s 00:05:01.190 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.190 12:10:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.190 ************************************ 00:05:01.190 END TEST rpc_daemon_integrity 00:05:01.190 ************************************ 00:05:01.190 12:10:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.190 12:10:28 rpc -- rpc/rpc.sh@84 -- # killprocess 104089 00:05:01.190 12:10:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 104089 ']' 00:05:01.190 12:10:28 rpc -- common/autotest_common.sh@958 -- # kill -0 104089 00:05:01.190 12:10:28 rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.190 12:10:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.190 12:10:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104089 
00:05:01.449 12:10:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.449 12:10:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.449 12:10:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104089' 00:05:01.449 killing process with pid 104089 00:05:01.449 12:10:28 rpc -- common/autotest_common.sh@973 -- # kill 104089 00:05:01.449 12:10:28 rpc -- common/autotest_common.sh@978 -- # wait 104089 00:05:01.710 00:05:01.710 real 0m2.035s 00:05:01.710 user 0m2.606s 00:05:01.710 sys 0m0.680s 00:05:01.710 12:10:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.710 12:10:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.710 ************************************ 00:05:01.710 END TEST rpc 00:05:01.710 ************************************ 00:05:01.710 12:10:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.710 12:10:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.710 12:10:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.710 12:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:01.710 ************************************ 00:05:01.710 START TEST skip_rpc 00:05:01.710 ************************************ 00:05:01.710 12:10:29 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.710 * Looking for test storage... 00:05:01.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.710 12:10:29 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.710 12:10:29 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.710 12:10:29 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.970 12:10:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --rc geninfo_unexecuted_blocks=1 00:05:01.970 00:05:01.970 ' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --rc geninfo_unexecuted_blocks=1 00:05:01.970 00:05:01.970 ' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --rc geninfo_unexecuted_blocks=1 00:05:01.970 00:05:01.970 ' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --rc geninfo_unexecuted_blocks=1 00:05:01.970 00:05:01.970 ' 00:05:01.970 12:10:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.970 12:10:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.970 12:10:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.970 12:10:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.970 ************************************ 00:05:01.970 START TEST skip_rpc 00:05:01.970 ************************************ 00:05:01.970 12:10:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:01.970 
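The skip_rpc case entered above starts the target with --no-rpc-server and asserts that any RPC must fail; a minimal by-hand equivalent using the same flags the test passes below:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5   # mirrors the test's fixed startup wait
    ./scripts/rpc.py spdk_get_version || echo 'RPC refused, as the test expects'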
12:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104712 00:05:01.970 12:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.970 12:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.970 12:10:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.970 [2024-12-13 12:10:29.532515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:01.970 [2024-12-13 12:10:29.532551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104712 ] 00:05:01.970 [2024-12-13 12:10:29.604757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.970 [2024-12-13 12:10:29.627007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104712 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104712 ']' 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104712 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104712 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104712' 00:05:07.250 killing process with pid 104712 00:05:07.250 12:10:34 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104712 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104712 00:05:07.250 00:05:07.250 real 0m5.362s 00:05:07.250 user 0m5.122s 00:05:07.250 sys 0m0.281s 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.250 12:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 ************************************ 00:05:07.250 END TEST skip_rpc 00:05:07.250 ************************************ 00:05:07.250 12:10:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.250 12:10:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.250 12:10:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.250 12:10:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 ************************************ 00:05:07.250 START TEST skip_rpc_with_json 00:05:07.250 ************************************ 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105634 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105634 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105634 ']' 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.250 12:10:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.510 [2024-12-13 12:10:34.962975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
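skip_rpc_with_json, starting above as pid 105634, verifies that configuration created over RPC can be saved as JSON: nvmf_get_transports must fail while no transport exists, a TCP transport is then created, and save_config dumps the state shown below. The same sequence by hand (assuming the default RPC socket):

    ./scripts/rpc.py nvmf_get_transports --trtype tcp    # expect 'No such device' at first
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json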
00:05:07.510 [2024-12-13 12:10:34.963016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105634 ] 00:05:07.510 [2024-12-13 12:10:35.034047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.510 [2024-12-13 12:10:35.053713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.770 [2024-12-13 12:10:35.262242] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.770 request: 00:05:07.770 { 00:05:07.770 "trtype": "tcp", 00:05:07.770 "method": "nvmf_get_transports", 00:05:07.770 "req_id": 1 00:05:07.770 } 00:05:07.770 Got JSON-RPC error response 00:05:07.770 response: 00:05:07.770 { 00:05:07.770 "code": -19, 00:05:07.770 "message": "No such device" 00:05:07.770 } 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.770 [2024-12-13 12:10:35.274358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.770 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.770 { 00:05:07.770 "subsystems": [ 00:05:07.770 { 00:05:07.770 "subsystem": "fsdev", 00:05:07.770 "config": [ 00:05:07.770 { 00:05:07.770 "method": "fsdev_set_opts", 00:05:07.770 "params": { 00:05:07.770 "fsdev_io_pool_size": 65535, 00:05:07.770 "fsdev_io_cache_size": 256 00:05:07.770 } 00:05:07.770 } 00:05:07.770 ] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "vfio_user_target", 00:05:07.770 "config": null 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "keyring", 00:05:07.770 "config": [] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "iobuf", 00:05:07.770 "config": [ 00:05:07.770 { 00:05:07.770 "method": "iobuf_set_options", 00:05:07.770 "params": { 00:05:07.770 "small_pool_count": 8192, 00:05:07.770 "large_pool_count": 1024, 00:05:07.770 "small_bufsize": 8192, 00:05:07.770 "large_bufsize": 135168, 00:05:07.770 "enable_numa": false 00:05:07.770 } 00:05:07.770 } 00:05:07.770 
] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "sock", 00:05:07.770 "config": [ 00:05:07.770 { 00:05:07.770 "method": "sock_set_default_impl", 00:05:07.770 "params": { 00:05:07.770 "impl_name": "posix" 00:05:07.770 } 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "method": "sock_impl_set_options", 00:05:07.770 "params": { 00:05:07.770 "impl_name": "ssl", 00:05:07.770 "recv_buf_size": 4096, 00:05:07.770 "send_buf_size": 4096, 00:05:07.770 "enable_recv_pipe": true, 00:05:07.770 "enable_quickack": false, 00:05:07.770 "enable_placement_id": 0, 00:05:07.770 "enable_zerocopy_send_server": true, 00:05:07.770 "enable_zerocopy_send_client": false, 00:05:07.770 "zerocopy_threshold": 0, 00:05:07.770 "tls_version": 0, 00:05:07.770 "enable_ktls": false 00:05:07.770 } 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "method": "sock_impl_set_options", 00:05:07.770 "params": { 00:05:07.770 "impl_name": "posix", 00:05:07.770 "recv_buf_size": 2097152, 00:05:07.770 "send_buf_size": 2097152, 00:05:07.770 "enable_recv_pipe": true, 00:05:07.770 "enable_quickack": false, 00:05:07.770 "enable_placement_id": 0, 00:05:07.770 "enable_zerocopy_send_server": true, 00:05:07.770 "enable_zerocopy_send_client": false, 00:05:07.770 "zerocopy_threshold": 0, 00:05:07.770 "tls_version": 0, 00:05:07.770 "enable_ktls": false 00:05:07.770 } 00:05:07.770 } 00:05:07.770 ] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "vmd", 00:05:07.770 "config": [] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "accel", 00:05:07.770 "config": [ 00:05:07.770 { 00:05:07.770 "method": "accel_set_options", 00:05:07.770 "params": { 00:05:07.770 "small_cache_size": 128, 00:05:07.770 "large_cache_size": 16, 00:05:07.770 "task_count": 2048, 00:05:07.770 "sequence_count": 2048, 00:05:07.770 "buf_count": 2048 00:05:07.770 } 00:05:07.770 } 00:05:07.770 ] 00:05:07.770 }, 00:05:07.770 { 00:05:07.770 "subsystem": "bdev", 00:05:07.770 "config": [ 00:05:07.770 { 00:05:07.770 "method": "bdev_set_options", 00:05:07.770 "params": { 00:05:07.770 "bdev_io_pool_size": 65535, 00:05:07.770 "bdev_io_cache_size": 256, 00:05:07.771 "bdev_auto_examine": true, 00:05:07.771 "iobuf_small_cache_size": 128, 00:05:07.771 "iobuf_large_cache_size": 16 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "bdev_raid_set_options", 00:05:07.771 "params": { 00:05:07.771 "process_window_size_kb": 1024, 00:05:07.771 "process_max_bandwidth_mb_sec": 0 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "bdev_iscsi_set_options", 00:05:07.771 "params": { 00:05:07.771 "timeout_sec": 30 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "bdev_nvme_set_options", 00:05:07.771 "params": { 00:05:07.771 "action_on_timeout": "none", 00:05:07.771 "timeout_us": 0, 00:05:07.771 "timeout_admin_us": 0, 00:05:07.771 "keep_alive_timeout_ms": 10000, 00:05:07.771 "arbitration_burst": 0, 00:05:07.771 "low_priority_weight": 0, 00:05:07.771 "medium_priority_weight": 0, 00:05:07.771 "high_priority_weight": 0, 00:05:07.771 "nvme_adminq_poll_period_us": 10000, 00:05:07.771 "nvme_ioq_poll_period_us": 0, 00:05:07.771 "io_queue_requests": 0, 00:05:07.771 "delay_cmd_submit": true, 00:05:07.771 "transport_retry_count": 4, 00:05:07.771 "bdev_retry_count": 3, 00:05:07.771 "transport_ack_timeout": 0, 00:05:07.771 "ctrlr_loss_timeout_sec": 0, 00:05:07.771 "reconnect_delay_sec": 0, 00:05:07.771 "fast_io_fail_timeout_sec": 0, 00:05:07.771 "disable_auto_failback": false, 00:05:07.771 "generate_uuids": false, 00:05:07.771 "transport_tos": 0, 
00:05:07.771 "nvme_error_stat": false, 00:05:07.771 "rdma_srq_size": 0, 00:05:07.771 "io_path_stat": false, 00:05:07.771 "allow_accel_sequence": false, 00:05:07.771 "rdma_max_cq_size": 0, 00:05:07.771 "rdma_cm_event_timeout_ms": 0, 00:05:07.771 "dhchap_digests": [ 00:05:07.771 "sha256", 00:05:07.771 "sha384", 00:05:07.771 "sha512" 00:05:07.771 ], 00:05:07.771 "dhchap_dhgroups": [ 00:05:07.771 "null", 00:05:07.771 "ffdhe2048", 00:05:07.771 "ffdhe3072", 00:05:07.771 "ffdhe4096", 00:05:07.771 "ffdhe6144", 00:05:07.771 "ffdhe8192" 00:05:07.771 ], 00:05:07.771 "rdma_umr_per_io": false 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "bdev_nvme_set_hotplug", 00:05:07.771 "params": { 00:05:07.771 "period_us": 100000, 00:05:07.771 "enable": false 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "bdev_wait_for_examine" 00:05:07.771 } 00:05:07.771 ] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "scsi", 00:05:07.771 "config": null 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "scheduler", 00:05:07.771 "config": [ 00:05:07.771 { 00:05:07.771 "method": "framework_set_scheduler", 00:05:07.771 "params": { 00:05:07.771 "name": "static" 00:05:07.771 } 00:05:07.771 } 00:05:07.771 ] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "vhost_scsi", 00:05:07.771 "config": [] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "vhost_blk", 00:05:07.771 "config": [] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "ublk", 00:05:07.771 "config": [] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "nbd", 00:05:07.771 "config": [] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "nvmf", 00:05:07.771 "config": [ 00:05:07.771 { 00:05:07.771 "method": "nvmf_set_config", 00:05:07.771 "params": { 00:05:07.771 "discovery_filter": "match_any", 00:05:07.771 "admin_cmd_passthru": { 00:05:07.771 "identify_ctrlr": false 00:05:07.771 }, 00:05:07.771 "dhchap_digests": [ 00:05:07.771 "sha256", 00:05:07.771 "sha384", 00:05:07.771 "sha512" 00:05:07.771 ], 00:05:07.771 "dhchap_dhgroups": [ 00:05:07.771 "null", 00:05:07.771 "ffdhe2048", 00:05:07.771 "ffdhe3072", 00:05:07.771 "ffdhe4096", 00:05:07.771 "ffdhe6144", 00:05:07.771 "ffdhe8192" 00:05:07.771 ] 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "nvmf_set_max_subsystems", 00:05:07.771 "params": { 00:05:07.771 "max_subsystems": 1024 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "nvmf_set_crdt", 00:05:07.771 "params": { 00:05:07.771 "crdt1": 0, 00:05:07.771 "crdt2": 0, 00:05:07.771 "crdt3": 0 00:05:07.771 } 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "method": "nvmf_create_transport", 00:05:07.771 "params": { 00:05:07.771 "trtype": "TCP", 00:05:07.771 "max_queue_depth": 128, 00:05:07.771 "max_io_qpairs_per_ctrlr": 127, 00:05:07.771 "in_capsule_data_size": 4096, 00:05:07.771 "max_io_size": 131072, 00:05:07.771 "io_unit_size": 131072, 00:05:07.771 "max_aq_depth": 128, 00:05:07.771 "num_shared_buffers": 511, 00:05:07.771 "buf_cache_size": 4294967295, 00:05:07.771 "dif_insert_or_strip": false, 00:05:07.771 "zcopy": false, 00:05:07.771 "c2h_success": true, 00:05:07.771 "sock_priority": 0, 00:05:07.771 "abort_timeout_sec": 1, 00:05:07.771 "ack_timeout": 0, 00:05:07.771 "data_wr_pool_size": 0 00:05:07.771 } 00:05:07.771 } 00:05:07.771 ] 00:05:07.771 }, 00:05:07.771 { 00:05:07.771 "subsystem": "iscsi", 00:05:07.771 "config": [ 00:05:07.771 { 00:05:07.771 "method": "iscsi_set_options", 00:05:07.771 "params": { 00:05:07.771 "node_base": 
"iqn.2016-06.io.spdk", 00:05:07.771 "max_sessions": 128, 00:05:07.771 "max_connections_per_session": 2, 00:05:07.771 "max_queue_depth": 64, 00:05:07.771 "default_time2wait": 2, 00:05:07.771 "default_time2retain": 20, 00:05:07.771 "first_burst_length": 8192, 00:05:07.771 "immediate_data": true, 00:05:07.771 "allow_duplicated_isid": false, 00:05:07.771 "error_recovery_level": 0, 00:05:07.771 "nop_timeout": 60, 00:05:07.771 "nop_in_interval": 30, 00:05:07.771 "disable_chap": false, 00:05:07.771 "require_chap": false, 00:05:07.771 "mutual_chap": false, 00:05:07.771 "chap_group": 0, 00:05:07.771 "max_large_datain_per_connection": 64, 00:05:07.771 "max_r2t_per_connection": 4, 00:05:07.771 "pdu_pool_size": 36864, 00:05:07.771 "immediate_data_pool_size": 16384, 00:05:07.771 "data_out_pool_size": 2048 00:05:07.771 } 00:05:07.771 } 00:05:07.771 ] 00:05:07.771 } 00:05:07.771 ] 00:05:07.771 } 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105634 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105634 ']' 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105634 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.771 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105634 00:05:08.031 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.031 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.031 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105634' 00:05:08.031 killing process with pid 105634 00:05:08.031 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105634 00:05:08.031 12:10:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105634 00:05:08.291 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105738 00:05:08.291 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.291 12:10:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105738 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105738 ']' 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105738 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105738 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.569 12:10:40 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105738' 00:05:13.569 killing process with pid 105738 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105738 00:05:13.569 12:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105738 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.569 00:05:13.569 real 0m6.240s 00:05:13.569 user 0m5.936s 00:05:13.569 sys 0m0.609s 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.569 ************************************ 00:05:13.569 END TEST skip_rpc_with_json 00:05:13.569 ************************************ 00:05:13.569 12:10:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.569 12:10:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.569 12:10:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.569 12:10:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.569 ************************************ 00:05:13.569 START TEST skip_rpc_with_delay 00:05:13.569 ************************************ 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.569 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.829 [2024-12-13 12:10:41.278820] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.829 00:05:13.829 real 0m0.069s 00:05:13.829 user 0m0.047s 00:05:13.829 sys 0m0.021s 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.829 12:10:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 ************************************ 00:05:13.829 END TEST skip_rpc_with_delay 00:05:13.829 ************************************ 00:05:13.829 12:10:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.829 12:10:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.829 12:10:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.829 12:10:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.829 12:10:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.829 12:10:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 ************************************ 00:05:13.829 START TEST exit_on_failed_rpc_init 00:05:13.829 ************************************ 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106776 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106776 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 106776 ']' 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.829 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 [2024-12-13 12:10:41.417196] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:13.829 [2024-12-13 12:10:41.417236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106776 ] 00:05:13.829 [2024-12-13 12:10:41.491398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.829 [2024-12-13 12:10:41.513703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.090 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.090 [2024-12-13 12:10:41.777613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:14.090 [2024-12-13 12:10:41.777652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106820 ] 00:05:14.350 [2024-12-13 12:10:41.849177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.350 [2024-12-13 12:10:41.871264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.350 [2024-12-13 12:10:41.871321] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
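The error above is the scenario exit_on_failed_rpc_init exists to cover: a second spdk_tgt bound to the default /var/tmp/spdk.sock while the first instance still holds it. A minimal sketch of the same collision, assuming a built spdk_tgt and illustrative paths:

    # First instance takes the default RPC socket /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 2
    # Second instance on another core mask but the same socket exits non-zero:
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    ./build/bin/spdk_tgt -m 0x2
    echo "second instance exited with $?"   # the harness normalizes this to es=1
    kill "$first_pid"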
00:05:14.350 [2024-12-13 12:10:41.871330] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.350 [2024-12-13 12:10:41.871336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106776 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 106776 ']' 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 106776 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106776 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106776' 00:05:14.350 killing process with pid 106776 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 106776 00:05:14.350 12:10:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 106776 00:05:14.611 00:05:14.611 real 0m0.892s 00:05:14.611 user 0m0.925s 00:05:14.611 sys 0m0.388s 00:05:14.611 12:10:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.611 12:10:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.611 ************************************ 00:05:14.611 END TEST exit_on_failed_rpc_init 00:05:14.611 ************************************ 00:05:14.611 12:10:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.611 00:05:14.611 real 0m13.025s 00:05:14.611 user 0m12.239s 00:05:14.611 sys 0m1.580s 00:05:14.611 12:10:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.611 12:10:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.611 ************************************ 00:05:14.611 END TEST skip_rpc 00:05:14.611 ************************************ 00:05:14.871 12:10:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.871 12:10:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.871 12:10:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.871 12:10:42 -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.871 ************************************ 00:05:14.871 START TEST rpc_client 00:05:14.871 ************************************ 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.871 * Looking for test storage... 00:05:14.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.871 12:10:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.871 --rc genhtml_branch_coverage=1 00:05:14.871 --rc genhtml_function_coverage=1 00:05:14.871 --rc genhtml_legend=1 00:05:14.871 --rc geninfo_all_blocks=1 00:05:14.871 --rc geninfo_unexecuted_blocks=1 00:05:14.871 00:05:14.871 ' 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.871 --rc genhtml_branch_coverage=1 00:05:14.871 --rc genhtml_function_coverage=1 00:05:14.871 --rc genhtml_legend=1 00:05:14.871 --rc geninfo_all_blocks=1 00:05:14.871 --rc geninfo_unexecuted_blocks=1 00:05:14.871 00:05:14.871 ' 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.871 --rc genhtml_branch_coverage=1 00:05:14.871 --rc genhtml_function_coverage=1 00:05:14.871 --rc genhtml_legend=1 00:05:14.871 --rc geninfo_all_blocks=1 00:05:14.871 --rc geninfo_unexecuted_blocks=1 00:05:14.871 00:05:14.871 ' 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.871 --rc genhtml_branch_coverage=1 00:05:14.871 --rc genhtml_function_coverage=1 00:05:14.871 --rc genhtml_legend=1 00:05:14.871 --rc geninfo_all_blocks=1 00:05:14.871 --rc geninfo_unexecuted_blocks=1 00:05:14.871 00:05:14.871 ' 00:05:14.871 12:10:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:14.871 OK 00:05:14.871 12:10:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.871 00:05:14.871 real 0m0.194s 00:05:14.871 user 0m0.116s 00:05:14.871 sys 0m0.089s 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.871 12:10:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.871 ************************************ 00:05:14.871 END TEST rpc_client 00:05:14.871 ************************************ 00:05:15.132 12:10:42 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:15.132 12:10:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.132 12:10:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.132 12:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:15.132 ************************************ 00:05:15.132 START TEST json_config 00:05:15.132 ************************************ 00:05:15.132 12:10:42 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.132 12:10:42 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.132 12:10:42 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.132 12:10:42 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.132 12:10:42 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.132 12:10:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.132 12:10:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.132 12:10:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.132 12:10:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.132 12:10:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.132 12:10:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.132 12:10:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.132 12:10:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.132 12:10:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.132 12:10:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.132 12:10:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.132 12:10:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.132 12:10:42 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.132 12:10:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.132 12:10:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.132 12:10:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.132 12:10:42 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.132 12:10:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.133 12:10:42 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.133 12:10:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.133 12:10:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.133 12:10:42 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.133 12:10:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.133 12:10:42 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.133 12:10:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.133 12:10:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.133 12:10:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.133 12:10:42 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.133 --rc genhtml_branch_coverage=1 00:05:15.133 --rc genhtml_function_coverage=1 00:05:15.133 --rc genhtml_legend=1 00:05:15.133 --rc geninfo_all_blocks=1 00:05:15.133 --rc geninfo_unexecuted_blocks=1 00:05:15.133 00:05:15.133 ' 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.133 --rc genhtml_branch_coverage=1 00:05:15.133 --rc genhtml_function_coverage=1 00:05:15.133 --rc genhtml_legend=1 00:05:15.133 --rc geninfo_all_blocks=1 00:05:15.133 --rc geninfo_unexecuted_blocks=1 00:05:15.133 00:05:15.133 ' 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.133 --rc genhtml_branch_coverage=1 00:05:15.133 --rc genhtml_function_coverage=1 00:05:15.133 --rc genhtml_legend=1 00:05:15.133 --rc geninfo_all_blocks=1 00:05:15.133 --rc geninfo_unexecuted_blocks=1 00:05:15.133 00:05:15.133 ' 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.133 --rc genhtml_branch_coverage=1 00:05:15.133 --rc genhtml_function_coverage=1 00:05:15.133 --rc genhtml_legend=1 00:05:15.133 --rc geninfo_all_blocks=1 00:05:15.133 --rc geninfo_unexecuted_blocks=1 00:05:15.133 00:05:15.133 ' 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:15.133 12:10:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.133 12:10:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.133 12:10:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.133 12:10:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.133 12:10:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.133 12:10:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.133 12:10:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.133 12:10:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.133 12:10:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.133 12:10:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
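The NVME_HOSTNQN/NVME_HOSTID pair set above comes from nvme-cli's gen-hostnqn. A minimal sketch of the derivation, with the uuid extraction shown here as an assumption (the exact parsing in nvmf/common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")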
00:05:15.133 12:10:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.133 12:10:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:15.133 INFO: JSON configuration test init 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.133 12:10:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.133 12:10:42 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:15.133 12:10:42 json_config -- json_config/common.sh@10 -- # shift 00:05:15.133 12:10:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.133 12:10:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.133 12:10:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.133 12:10:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.133 12:10:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.133 12:10:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107166 00:05:15.133 12:10:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.133 Waiting for target to run... 00:05:15.133 12:10:42 json_config -- json_config/common.sh@25 -- # waitforlisten 107166 /var/tmp/spdk_tgt.sock 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 107166 ']' 00:05:15.133 12:10:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.133 12:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.393 [2024-12-13 12:10:42.873983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:15.393 [2024-12-13 12:10:42.874034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107166 ] 00:05:15.653 [2024-12-13 12:10:43.328607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.653 [2024-12-13 12:10:43.351170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.222 12:10:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.222 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.222 12:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.222 12:10:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.222 12:10:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:19.523 12:10:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.523 12:10:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:19.523 12:10:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:19.523 12:10:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:19.523 12:10:47 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@54 -- # sort 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:19.523 12:10:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.523 12:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:19.523 12:10:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.523 12:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:19.523 12:10:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.523 12:10:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.782 MallocForNvmf0 00:05:19.782 12:10:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.782 12:10:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.782 MallocForNvmf1 00:05:19.782 12:10:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.782 12:10:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.041 [2024-12-13 12:10:47.617850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.041 12:10:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.041 12:10:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.300 12:10:47 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.300 12:10:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.560 12:10:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.560 12:10:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.560 12:10:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.560 12:10:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.818 [2024-12-13 12:10:48.384166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.819 12:10:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:20.819 12:10:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.819 12:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 12:10:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:20.819 12:10:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.819 12:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 12:10:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:20.819 12:10:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.819 12:10:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.078 MallocBdevForConfigChangeCheck 00:05:21.078 12:10:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:21.078 12:10:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.078 12:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.078 12:10:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:21.078 12:10:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.337 12:10:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:21.337 INFO: shutting down applications... 
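The "shutting down applications..." phase below clears the live configuration subsystem by subsystem, then re-reads it until it filters down to empty. A condensed sketch of one pass of that check (paths illustrative):

    # Clear the live configuration, then verify nothing meaningful is left.
    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method delete_global_parameters \
        | ./test/json_config/config_filter.py -method check_empty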
00:05:21.337 12:10:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:21.337 12:10:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:21.337 12:10:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:21.337 12:10:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:23.242 Calling clear_iscsi_subsystem 00:05:23.242 Calling clear_nvmf_subsystem 00:05:23.242 Calling clear_nbd_subsystem 00:05:23.242 Calling clear_ublk_subsystem 00:05:23.242 Calling clear_vhost_blk_subsystem 00:05:23.242 Calling clear_vhost_scsi_subsystem 00:05:23.242 Calling clear_bdev_subsystem 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:23.242 12:10:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.502 12:10:51 json_config -- json_config/json_config.sh@352 -- # break 00:05:23.502 12:10:51 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:23.502 12:10:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:23.502 12:10:51 json_config -- json_config/common.sh@31 -- # local app=target 00:05:23.502 12:10:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.502 12:10:51 json_config -- json_config/common.sh@35 -- # [[ -n 107166 ]] 00:05:23.502 12:10:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107166 00:05:23.502 12:10:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.502 12:10:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.502 12:10:51 json_config -- json_config/common.sh@41 -- # kill -0 107166 00:05:23.502 12:10:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.070 12:10:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.070 12:10:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.070 12:10:51 json_config -- json_config/common.sh@41 -- # kill -0 107166 00:05:24.070 12:10:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.070 12:10:51 json_config -- json_config/common.sh@43 -- # break 00:05:24.070 12:10:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.070 12:10:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.070 SPDK target shutdown done 00:05:24.070 12:10:51 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:24.070 INFO: relaunching applications... 
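The shutdown path traced above (json_config/common.sh) reduces to one SIGINT followed by a bounded liveness poll. A condensed sketch of the visible control flow; $pid stands in for the app pid, and the stderr redirect is added here only to keep the loop quiet:

# Send SIGINT, then poll the pid for up to 30 half-second intervals.
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the process still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'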
00:05:24.070 12:10:51 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.070 12:10:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.070 12:10:51 json_config -- json_config/common.sh@10 -- # shift 00:05:24.070 12:10:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.070 12:10:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.070 12:10:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.070 12:10:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.070 12:10:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.070 12:10:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108646 00:05:24.070 12:10:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.070 Waiting for target to run... 00:05:24.070 12:10:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.070 12:10:51 json_config -- json_config/common.sh@25 -- # waitforlisten 108646 /var/tmp/spdk_tgt.sock 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 108646 ']' 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.070 12:10:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.070 [2024-12-13 12:10:51.574835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:24.070 [2024-12-13 12:10:51.574889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108646 ] 00:05:24.639 [2024-12-13 12:10:52.035183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.639 [2024-12-13 12:10:52.054495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.943 [2024-12-13 12:10:55.059579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.943 [2024-12-13 12:10:55.091842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:28.203 12:10:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.203 12:10:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:28.203 12:10:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.203 00:05:28.203 12:10:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:28.203 12:10:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:28.203 INFO: Checking if target configuration is the same... 
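The check announced above works by dumping the live configuration over RPC and diffing it against the saved JSON file, with both sides normalized first; the json_diff.sh trace below shows the real run. In brief (paths abbreviated relative to the spdk tree, tmp-file names illustrative):

# Normalize both configs with config_filter.py -method sort, then diff.
live=$(mktemp /tmp/62.XXX)
saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
diff -u "$live" "$saved"   # exit 0: configs identical; exit 1: change detected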
00:05:28.203 12:10:55 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.203 12:10:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:28.203 12:10:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.203 + '[' 2 -ne 2 ']' 00:05:28.203 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.203 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:28.203 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.203 +++ basename /dev/fd/62 00:05:28.203 ++ mktemp /tmp/62.XXX 00:05:28.203 + tmp_file_1=/tmp/62.hJa 00:05:28.203 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.203 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.203 + tmp_file_2=/tmp/spdk_tgt_config.json.xPs 00:05:28.203 + ret=0 00:05:28.203 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.723 + diff -u /tmp/62.hJa /tmp/spdk_tgt_config.json.xPs 00:05:28.723 + echo 'INFO: JSON config files are the same' 00:05:28.723 INFO: JSON config files are the same 00:05:28.723 + rm /tmp/62.hJa /tmp/spdk_tgt_config.json.xPs 00:05:28.723 + exit 0 00:05:28.723 12:10:56 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:28.723 12:10:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:28.723 INFO: changing configuration and checking if this can be detected... 00:05:28.723 12:10:56 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.723 12:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.723 12:10:56 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.723 12:10:56 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:28.723 12:10:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.723 + '[' 2 -ne 2 ']' 00:05:28.723 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.723 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:28.723 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.723 +++ basename /dev/fd/62 00:05:28.723 ++ mktemp /tmp/62.XXX 00:05:28.723 + tmp_file_1=/tmp/62.4OY 00:05:28.982 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.982 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.982 + tmp_file_2=/tmp/spdk_tgt_config.json.umy 00:05:28.982 + ret=0 00:05:28.982 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.243 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.243 + diff -u /tmp/62.4OY /tmp/spdk_tgt_config.json.umy 00:05:29.243 + ret=1 00:05:29.243 + echo '=== Start of file: /tmp/62.4OY ===' 00:05:29.243 + cat /tmp/62.4OY 00:05:29.243 + echo '=== End of file: /tmp/62.4OY ===' 00:05:29.243 + echo '' 00:05:29.243 + echo '=== Start of file: /tmp/spdk_tgt_config.json.umy ===' 00:05:29.243 + cat /tmp/spdk_tgt_config.json.umy 00:05:29.243 + echo '=== End of file: /tmp/spdk_tgt_config.json.umy ===' 00:05:29.243 + echo '' 00:05:29.243 + rm /tmp/62.4OY /tmp/spdk_tgt_config.json.umy 00:05:29.243 + exit 1 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:29.243 INFO: configuration change detected. 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@324 -- # [[ -n 108646 ]] 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.243 12:10:56 json_config -- json_config/json_config.sh@330 -- # killprocess 108646 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@954 -- # '[' -z 108646 ']' 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@958 -- # kill -0 108646 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@959 -- # uname 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.243 12:10:56 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108646 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108646' 00:05:29.243 killing process with pid 108646 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@973 -- # kill 108646 00:05:29.243 12:10:56 json_config -- common/autotest_common.sh@978 -- # wait 108646 00:05:31.152 12:10:58 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.152 12:10:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:31.152 12:10:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.152 12:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.153 12:10:58 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:31.153 12:10:58 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:31.153 INFO: Success 00:05:31.153 00:05:31.153 real 0m15.766s 00:05:31.153 user 0m16.814s 00:05:31.153 sys 0m2.130s 00:05:31.153 12:10:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.153 12:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.153 ************************************ 00:05:31.153 END TEST json_config 00:05:31.153 ************************************ 00:05:31.153 12:10:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.153 12:10:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.153 12:10:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.153 12:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.153 ************************************ 00:05:31.153 START TEST json_config_extra_key 00:05:31.153 ************************************ 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.153 12:10:58 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.153 --rc genhtml_branch_coverage=1 00:05:31.153 --rc genhtml_function_coverage=1 00:05:31.153 --rc genhtml_legend=1 00:05:31.153 --rc geninfo_all_blocks=1 00:05:31.153 --rc geninfo_unexecuted_blocks=1 00:05:31.153 00:05:31.153 ' 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.153 --rc genhtml_branch_coverage=1 00:05:31.153 --rc genhtml_function_coverage=1 00:05:31.153 --rc genhtml_legend=1 00:05:31.153 --rc geninfo_all_blocks=1 00:05:31.153 --rc geninfo_unexecuted_blocks=1 00:05:31.153 00:05:31.153 ' 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.153 --rc genhtml_branch_coverage=1 00:05:31.153 --rc genhtml_function_coverage=1 00:05:31.153 --rc genhtml_legend=1 00:05:31.153 --rc geninfo_all_blocks=1 00:05:31.153 --rc geninfo_unexecuted_blocks=1 00:05:31.153 00:05:31.153 ' 00:05:31.153 12:10:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.153 --rc genhtml_branch_coverage=1 00:05:31.153 --rc genhtml_function_coverage=1 00:05:31.153 --rc genhtml_legend=1 00:05:31.153 --rc geninfo_all_blocks=1 00:05:31.153 --rc geninfo_unexecuted_blocks=1 00:05:31.153 00:05:31.153 ' 00:05:31.153 12:10:58 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.153 12:10:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.153 12:10:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.153 12:10:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.153 12:10:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.153 12:10:58 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.153 12:10:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.153 12:10:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.154 12:10:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.154 INFO: launching applications... 
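Two details worth noting in the trace above. First, the "integer expression expected" message comes from nvmf/common.sh line 33 applying -eq to an empty string; the test just fails and the script continues, so it is harmless here. Second, json_config's common.sh keys all per-app state by app name in associative arrays, condensed from the trace:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')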
00:05:31.154 12:10:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=109897 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.154 Waiting for target to run... 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 109897 /var/tmp/spdk_tgt.sock 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 109897 ']' 00:05:31.154 12:10:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.154 12:10:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.154 [2024-12-13 12:10:58.705168] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:31.154 [2024-12-13 12:10:58.705216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109897 ] 00:05:31.724 [2024-12-13 12:10:59.158264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.724 [2024-12-13 12:10:59.180287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.984 12:10:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.984 12:10:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.984 00:05:31.984 12:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.984 INFO: shutting down applications... 
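Before the shutdown announced above, the harness launched the target with the extra_key config and blocked in waitforlisten until the RPC socket answered. waitforlisten's internals are not shown in this trace; one hedged way to get the same effect from a shell is rpc.py's built-in connection retry (-r), which the spdkcli_tcp test further down in this log also uses with -r 100:

# Hypothetical readiness probe (not what the harness runs): rpc.py retries
# the connection -r times; rpc_get_methods returns once the socket is live.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock -r 100 rpc_get_methods > /dev/null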
00:05:31.984 12:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 109897 ]] 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 109897 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109897 00:05:31.984 12:10:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109897 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.555 12:11:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.555 SPDK target shutdown done 00:05:32.555 12:11:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:32.555 Success 00:05:32.555 00:05:32.555 real 0m1.583s 00:05:32.555 user 0m1.207s 00:05:32.555 sys 0m0.564s 00:05:32.555 12:11:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.555 12:11:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.555 ************************************ 00:05:32.555 END TEST json_config_extra_key 00:05:32.555 ************************************ 00:05:32.555 12:11:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.555 12:11:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.555 12:11:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.555 12:11:00 -- common/autotest_common.sh@10 -- # set +x 00:05:32.555 ************************************ 00:05:32.555 START TEST alias_rpc 00:05:32.555 ************************************ 00:05:32.555 12:11:00 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.555 * Looking for test storage... 
00:05:32.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:32.555 12:11:00 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.555 12:11:00 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.555 12:11:00 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.816 12:11:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.816 --rc genhtml_branch_coverage=1 00:05:32.816 --rc genhtml_function_coverage=1 00:05:32.816 --rc genhtml_legend=1 00:05:32.816 --rc geninfo_all_blocks=1 00:05:32.816 --rc geninfo_unexecuted_blocks=1 00:05:32.816 00:05:32.816 ' 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.816 --rc genhtml_branch_coverage=1 00:05:32.816 --rc genhtml_function_coverage=1 00:05:32.816 --rc genhtml_legend=1 00:05:32.816 --rc geninfo_all_blocks=1 00:05:32.816 --rc geninfo_unexecuted_blocks=1 00:05:32.816 00:05:32.816 ' 00:05:32.816 12:11:00 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.816 --rc genhtml_branch_coverage=1 00:05:32.816 --rc genhtml_function_coverage=1 00:05:32.816 --rc genhtml_legend=1 00:05:32.816 --rc geninfo_all_blocks=1 00:05:32.816 --rc geninfo_unexecuted_blocks=1 00:05:32.816 00:05:32.816 ' 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.816 --rc genhtml_branch_coverage=1 00:05:32.816 --rc genhtml_function_coverage=1 00:05:32.816 --rc genhtml_legend=1 00:05:32.816 --rc geninfo_all_blocks=1 00:05:32.816 --rc geninfo_unexecuted_blocks=1 00:05:32.816 00:05:32.816 ' 00:05:32.816 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.816 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110329 00:05:32.816 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.816 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110329 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110329 ']' 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.816 12:11:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.816 [2024-12-13 12:11:00.351935] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:32.816 [2024-12-13 12:11:00.351986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110329 ] 00:05:32.816 [2024-12-13 12:11:00.423789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.816 [2024-12-13 12:11:00.447025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.076 12:11:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.076 12:11:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.076 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:33.337 12:11:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110329 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110329 ']' 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110329 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110329 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110329' 00:05:33.337 killing process with pid 110329 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 110329 00:05:33.337 12:11:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 110329 00:05:33.597 00:05:33.597 real 0m1.092s 00:05:33.597 user 0m1.129s 00:05:33.597 sys 0m0.395s 00:05:33.597 12:11:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.597 12:11:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.597 ************************************ 00:05:33.597 END TEST alias_rpc 00:05:33.597 ************************************ 00:05:33.597 12:11:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:33.597 12:11:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.597 12:11:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.597 12:11:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.597 12:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.597 ************************************ 00:05:33.597 START TEST spdkcli_tcp 00:05:33.597 ************************************ 00:05:33.597 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.857 * Looking for test storage... 
00:05:33.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.857 12:11:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.857 --rc genhtml_branch_coverage=1 00:05:33.857 --rc genhtml_function_coverage=1 00:05:33.857 --rc genhtml_legend=1 00:05:33.857 --rc geninfo_all_blocks=1 00:05:33.857 --rc geninfo_unexecuted_blocks=1 00:05:33.857 00:05:33.857 ' 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.857 --rc genhtml_branch_coverage=1 00:05:33.857 --rc genhtml_function_coverage=1 00:05:33.857 --rc genhtml_legend=1 00:05:33.857 --rc geninfo_all_blocks=1 00:05:33.857 --rc 
geninfo_unexecuted_blocks=1 00:05:33.857 00:05:33.857 ' 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.857 --rc genhtml_branch_coverage=1 00:05:33.857 --rc genhtml_function_coverage=1 00:05:33.857 --rc genhtml_legend=1 00:05:33.857 --rc geninfo_all_blocks=1 00:05:33.857 --rc geninfo_unexecuted_blocks=1 00:05:33.857 00:05:33.857 ' 00:05:33.857 12:11:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.858 --rc genhtml_branch_coverage=1 00:05:33.858 --rc genhtml_function_coverage=1 00:05:33.858 --rc genhtml_legend=1 00:05:33.858 --rc geninfo_all_blocks=1 00:05:33.858 --rc geninfo_unexecuted_blocks=1 00:05:33.858 00:05:33.858 ' 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110490 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110490 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110490 ']' 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.858 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.858 12:11:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.858 [2024-12-13 12:11:01.509775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:33.858 [2024-12-13 12:11:01.509829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110490 ] 00:05:34.118 [2024-12-13 12:11:01.583787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.118 [2024-12-13 12:11:01.607566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.118 [2024-12-13 12:11:01.607568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.118 12:11:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.118 12:11:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:34.118 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:34.118 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110678 00:05:34.118 12:11:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:34.379 [ 00:05:34.379 "bdev_malloc_delete", 00:05:34.379 "bdev_malloc_create", 00:05:34.379 "bdev_null_resize", 00:05:34.379 "bdev_null_delete", 00:05:34.379 "bdev_null_create", 00:05:34.379 "bdev_nvme_cuse_unregister", 00:05:34.379 "bdev_nvme_cuse_register", 00:05:34.379 "bdev_opal_new_user", 00:05:34.379 "bdev_opal_set_lock_state", 00:05:34.379 "bdev_opal_delete", 00:05:34.379 "bdev_opal_get_info", 00:05:34.379 "bdev_opal_create", 00:05:34.379 "bdev_nvme_opal_revert", 00:05:34.379 "bdev_nvme_opal_init", 00:05:34.379 "bdev_nvme_send_cmd", 00:05:34.379 "bdev_nvme_set_keys", 00:05:34.379 "bdev_nvme_get_path_iostat", 00:05:34.379 "bdev_nvme_get_mdns_discovery_info", 00:05:34.379 "bdev_nvme_stop_mdns_discovery", 00:05:34.379 "bdev_nvme_start_mdns_discovery", 00:05:34.379 "bdev_nvme_set_multipath_policy", 00:05:34.379 "bdev_nvme_set_preferred_path", 00:05:34.379 "bdev_nvme_get_io_paths", 00:05:34.379 "bdev_nvme_remove_error_injection", 00:05:34.379 "bdev_nvme_add_error_injection", 00:05:34.379 "bdev_nvme_get_discovery_info", 00:05:34.379 "bdev_nvme_stop_discovery", 00:05:34.379 "bdev_nvme_start_discovery", 00:05:34.379 "bdev_nvme_get_controller_health_info", 00:05:34.379 "bdev_nvme_disable_controller", 00:05:34.379 "bdev_nvme_enable_controller", 00:05:34.379 "bdev_nvme_reset_controller", 00:05:34.379 "bdev_nvme_get_transport_statistics", 00:05:34.379 "bdev_nvme_apply_firmware", 00:05:34.379 "bdev_nvme_detach_controller", 00:05:34.379 "bdev_nvme_get_controllers", 00:05:34.379 "bdev_nvme_attach_controller", 00:05:34.379 "bdev_nvme_set_hotplug", 00:05:34.379 "bdev_nvme_set_options", 00:05:34.379 "bdev_passthru_delete", 00:05:34.379 "bdev_passthru_create", 00:05:34.379 "bdev_lvol_set_parent_bdev", 00:05:34.379 "bdev_lvol_set_parent", 00:05:34.379 "bdev_lvol_check_shallow_copy", 00:05:34.379 "bdev_lvol_start_shallow_copy", 00:05:34.379 "bdev_lvol_grow_lvstore", 00:05:34.379 "bdev_lvol_get_lvols", 00:05:34.379 "bdev_lvol_get_lvstores", 00:05:34.379 "bdev_lvol_delete", 00:05:34.379 "bdev_lvol_set_read_only", 00:05:34.379 "bdev_lvol_resize", 00:05:34.379 "bdev_lvol_decouple_parent", 00:05:34.379 "bdev_lvol_inflate", 00:05:34.379 "bdev_lvol_rename", 00:05:34.379 "bdev_lvol_clone_bdev", 00:05:34.379 "bdev_lvol_clone", 00:05:34.379 "bdev_lvol_snapshot", 00:05:34.379 "bdev_lvol_create", 00:05:34.379 "bdev_lvol_delete_lvstore", 00:05:34.379 "bdev_lvol_rename_lvstore", 
00:05:34.379 "bdev_lvol_create_lvstore", 00:05:34.379 "bdev_raid_set_options", 00:05:34.379 "bdev_raid_remove_base_bdev", 00:05:34.379 "bdev_raid_add_base_bdev", 00:05:34.379 "bdev_raid_delete", 00:05:34.379 "bdev_raid_create", 00:05:34.379 "bdev_raid_get_bdevs", 00:05:34.379 "bdev_error_inject_error", 00:05:34.379 "bdev_error_delete", 00:05:34.379 "bdev_error_create", 00:05:34.379 "bdev_split_delete", 00:05:34.379 "bdev_split_create", 00:05:34.379 "bdev_delay_delete", 00:05:34.379 "bdev_delay_create", 00:05:34.379 "bdev_delay_update_latency", 00:05:34.379 "bdev_zone_block_delete", 00:05:34.379 "bdev_zone_block_create", 00:05:34.379 "blobfs_create", 00:05:34.379 "blobfs_detect", 00:05:34.379 "blobfs_set_cache_size", 00:05:34.379 "bdev_aio_delete", 00:05:34.379 "bdev_aio_rescan", 00:05:34.379 "bdev_aio_create", 00:05:34.379 "bdev_ftl_set_property", 00:05:34.379 "bdev_ftl_get_properties", 00:05:34.379 "bdev_ftl_get_stats", 00:05:34.379 "bdev_ftl_unmap", 00:05:34.379 "bdev_ftl_unload", 00:05:34.379 "bdev_ftl_delete", 00:05:34.379 "bdev_ftl_load", 00:05:34.379 "bdev_ftl_create", 00:05:34.379 "bdev_virtio_attach_controller", 00:05:34.379 "bdev_virtio_scsi_get_devices", 00:05:34.379 "bdev_virtio_detach_controller", 00:05:34.379 "bdev_virtio_blk_set_hotplug", 00:05:34.379 "bdev_iscsi_delete", 00:05:34.379 "bdev_iscsi_create", 00:05:34.379 "bdev_iscsi_set_options", 00:05:34.379 "accel_error_inject_error", 00:05:34.379 "ioat_scan_accel_module", 00:05:34.379 "dsa_scan_accel_module", 00:05:34.379 "iaa_scan_accel_module", 00:05:34.379 "vfu_virtio_create_fs_endpoint", 00:05:34.379 "vfu_virtio_create_scsi_endpoint", 00:05:34.379 "vfu_virtio_scsi_remove_target", 00:05:34.379 "vfu_virtio_scsi_add_target", 00:05:34.379 "vfu_virtio_create_blk_endpoint", 00:05:34.379 "vfu_virtio_delete_endpoint", 00:05:34.379 "keyring_file_remove_key", 00:05:34.379 "keyring_file_add_key", 00:05:34.379 "keyring_linux_set_options", 00:05:34.379 "fsdev_aio_delete", 00:05:34.379 "fsdev_aio_create", 00:05:34.379 "iscsi_get_histogram", 00:05:34.379 "iscsi_enable_histogram", 00:05:34.379 "iscsi_set_options", 00:05:34.379 "iscsi_get_auth_groups", 00:05:34.379 "iscsi_auth_group_remove_secret", 00:05:34.379 "iscsi_auth_group_add_secret", 00:05:34.379 "iscsi_delete_auth_group", 00:05:34.379 "iscsi_create_auth_group", 00:05:34.379 "iscsi_set_discovery_auth", 00:05:34.379 "iscsi_get_options", 00:05:34.379 "iscsi_target_node_request_logout", 00:05:34.379 "iscsi_target_node_set_redirect", 00:05:34.379 "iscsi_target_node_set_auth", 00:05:34.379 "iscsi_target_node_add_lun", 00:05:34.379 "iscsi_get_stats", 00:05:34.379 "iscsi_get_connections", 00:05:34.379 "iscsi_portal_group_set_auth", 00:05:34.379 "iscsi_start_portal_group", 00:05:34.379 "iscsi_delete_portal_group", 00:05:34.379 "iscsi_create_portal_group", 00:05:34.379 "iscsi_get_portal_groups", 00:05:34.379 "iscsi_delete_target_node", 00:05:34.379 "iscsi_target_node_remove_pg_ig_maps", 00:05:34.379 "iscsi_target_node_add_pg_ig_maps", 00:05:34.379 "iscsi_create_target_node", 00:05:34.379 "iscsi_get_target_nodes", 00:05:34.379 "iscsi_delete_initiator_group", 00:05:34.379 "iscsi_initiator_group_remove_initiators", 00:05:34.379 "iscsi_initiator_group_add_initiators", 00:05:34.379 "iscsi_create_initiator_group", 00:05:34.379 "iscsi_get_initiator_groups", 00:05:34.379 "nvmf_set_crdt", 00:05:34.379 "nvmf_set_config", 00:05:34.379 "nvmf_set_max_subsystems", 00:05:34.379 "nvmf_stop_mdns_prr", 00:05:34.379 "nvmf_publish_mdns_prr", 00:05:34.379 "nvmf_subsystem_get_listeners", 00:05:34.379 
"nvmf_subsystem_get_qpairs", 00:05:34.379 "nvmf_subsystem_get_controllers", 00:05:34.379 "nvmf_get_stats", 00:05:34.379 "nvmf_get_transports", 00:05:34.379 "nvmf_create_transport", 00:05:34.379 "nvmf_get_targets", 00:05:34.379 "nvmf_delete_target", 00:05:34.379 "nvmf_create_target", 00:05:34.379 "nvmf_subsystem_allow_any_host", 00:05:34.379 "nvmf_subsystem_set_keys", 00:05:34.379 "nvmf_subsystem_remove_host", 00:05:34.379 "nvmf_subsystem_add_host", 00:05:34.379 "nvmf_ns_remove_host", 00:05:34.379 "nvmf_ns_add_host", 00:05:34.379 "nvmf_subsystem_remove_ns", 00:05:34.379 "nvmf_subsystem_set_ns_ana_group", 00:05:34.379 "nvmf_subsystem_add_ns", 00:05:34.379 "nvmf_subsystem_listener_set_ana_state", 00:05:34.379 "nvmf_discovery_get_referrals", 00:05:34.379 "nvmf_discovery_remove_referral", 00:05:34.379 "nvmf_discovery_add_referral", 00:05:34.379 "nvmf_subsystem_remove_listener", 00:05:34.379 "nvmf_subsystem_add_listener", 00:05:34.379 "nvmf_delete_subsystem", 00:05:34.379 "nvmf_create_subsystem", 00:05:34.379 "nvmf_get_subsystems", 00:05:34.379 "env_dpdk_get_mem_stats", 00:05:34.379 "nbd_get_disks", 00:05:34.379 "nbd_stop_disk", 00:05:34.379 "nbd_start_disk", 00:05:34.379 "ublk_recover_disk", 00:05:34.380 "ublk_get_disks", 00:05:34.380 "ublk_stop_disk", 00:05:34.380 "ublk_start_disk", 00:05:34.380 "ublk_destroy_target", 00:05:34.380 "ublk_create_target", 00:05:34.380 "virtio_blk_create_transport", 00:05:34.380 "virtio_blk_get_transports", 00:05:34.380 "vhost_controller_set_coalescing", 00:05:34.380 "vhost_get_controllers", 00:05:34.380 "vhost_delete_controller", 00:05:34.380 "vhost_create_blk_controller", 00:05:34.380 "vhost_scsi_controller_remove_target", 00:05:34.380 "vhost_scsi_controller_add_target", 00:05:34.380 "vhost_start_scsi_controller", 00:05:34.380 "vhost_create_scsi_controller", 00:05:34.380 "thread_set_cpumask", 00:05:34.380 "scheduler_set_options", 00:05:34.380 "framework_get_governor", 00:05:34.380 "framework_get_scheduler", 00:05:34.380 "framework_set_scheduler", 00:05:34.380 "framework_get_reactors", 00:05:34.380 "thread_get_io_channels", 00:05:34.380 "thread_get_pollers", 00:05:34.380 "thread_get_stats", 00:05:34.380 "framework_monitor_context_switch", 00:05:34.380 "spdk_kill_instance", 00:05:34.380 "log_enable_timestamps", 00:05:34.380 "log_get_flags", 00:05:34.380 "log_clear_flag", 00:05:34.380 "log_set_flag", 00:05:34.380 "log_get_level", 00:05:34.380 "log_set_level", 00:05:34.380 "log_get_print_level", 00:05:34.380 "log_set_print_level", 00:05:34.380 "framework_enable_cpumask_locks", 00:05:34.380 "framework_disable_cpumask_locks", 00:05:34.380 "framework_wait_init", 00:05:34.380 "framework_start_init", 00:05:34.380 "scsi_get_devices", 00:05:34.380 "bdev_get_histogram", 00:05:34.380 "bdev_enable_histogram", 00:05:34.380 "bdev_set_qos_limit", 00:05:34.380 "bdev_set_qd_sampling_period", 00:05:34.380 "bdev_get_bdevs", 00:05:34.380 "bdev_reset_iostat", 00:05:34.380 "bdev_get_iostat", 00:05:34.380 "bdev_examine", 00:05:34.380 "bdev_wait_for_examine", 00:05:34.380 "bdev_set_options", 00:05:34.380 "accel_get_stats", 00:05:34.380 "accel_set_options", 00:05:34.380 "accel_set_driver", 00:05:34.380 "accel_crypto_key_destroy", 00:05:34.380 "accel_crypto_keys_get", 00:05:34.380 "accel_crypto_key_create", 00:05:34.380 "accel_assign_opc", 00:05:34.380 "accel_get_module_info", 00:05:34.380 "accel_get_opc_assignments", 00:05:34.380 "vmd_rescan", 00:05:34.380 "vmd_remove_device", 00:05:34.380 "vmd_enable", 00:05:34.380 "sock_get_default_impl", 00:05:34.380 "sock_set_default_impl", 
00:05:34.380 "sock_impl_set_options", 00:05:34.380 "sock_impl_get_options", 00:05:34.380 "iobuf_get_stats", 00:05:34.380 "iobuf_set_options", 00:05:34.380 "keyring_get_keys", 00:05:34.380 "vfu_tgt_set_base_path", 00:05:34.380 "framework_get_pci_devices", 00:05:34.380 "framework_get_config", 00:05:34.380 "framework_get_subsystems", 00:05:34.380 "fsdev_set_opts", 00:05:34.380 "fsdev_get_opts", 00:05:34.380 "trace_get_info", 00:05:34.380 "trace_get_tpoint_group_mask", 00:05:34.380 "trace_disable_tpoint_group", 00:05:34.380 "trace_enable_tpoint_group", 00:05:34.380 "trace_clear_tpoint_mask", 00:05:34.380 "trace_set_tpoint_mask", 00:05:34.380 "notify_get_notifications", 00:05:34.380 "notify_get_types", 00:05:34.380 "spdk_get_version", 00:05:34.380 "rpc_get_methods" 00:05:34.380 ] 00:05:34.380 12:11:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.380 12:11:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:34.380 12:11:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110490 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110490 ']' 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110490 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.380 12:11:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110490 00:05:34.640 12:11:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.640 12:11:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.640 12:11:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110490' 00:05:34.640 killing process with pid 110490 00:05:34.640 12:11:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110490 00:05:34.640 12:11:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110490 00:05:34.901 00:05:34.901 real 0m1.109s 00:05:34.901 user 0m1.897s 00:05:34.901 sys 0m0.446s 00:05:34.901 12:11:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.901 12:11:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.901 ************************************ 00:05:34.901 END TEST spdkcli_tcp 00:05:34.901 ************************************ 00:05:34.901 12:11:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.901 12:11:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.901 12:11:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.901 12:11:02 -- common/autotest_common.sh@10 -- # set +x 00:05:34.901 ************************************ 00:05:34.901 START TEST dpdk_mem_utility 00:05:34.901 ************************************ 00:05:34.901 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.901 * Looking for test storage... 
00:05:34.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.901 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.901 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.901 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.161 12:11:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.161 --rc genhtml_branch_coverage=1 00:05:35.161 --rc genhtml_function_coverage=1 00:05:35.161 --rc genhtml_legend=1 00:05:35.161 --rc geninfo_all_blocks=1 00:05:35.161 --rc geninfo_unexecuted_blocks=1 00:05:35.161 00:05:35.161 ' 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.161 --rc 
genhtml_branch_coverage=1 00:05:35.161 --rc genhtml_function_coverage=1 00:05:35.161 --rc genhtml_legend=1 00:05:35.161 --rc geninfo_all_blocks=1 00:05:35.161 --rc geninfo_unexecuted_blocks=1 00:05:35.161 00:05:35.161 ' 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.161 --rc genhtml_branch_coverage=1 00:05:35.161 --rc genhtml_function_coverage=1 00:05:35.161 --rc genhtml_legend=1 00:05:35.161 --rc geninfo_all_blocks=1 00:05:35.161 --rc geninfo_unexecuted_blocks=1 00:05:35.161 00:05:35.161 ' 00:05:35.161 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.161 --rc genhtml_branch_coverage=1 00:05:35.161 --rc genhtml_function_coverage=1 00:05:35.161 --rc genhtml_legend=1 00:05:35.161 --rc geninfo_all_blocks=1 00:05:35.161 --rc geninfo_unexecuted_blocks=1 00:05:35.161 00:05:35.161 ' 00:05:35.161 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.161 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110760 00:05:35.161 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110760 00:05:35.161 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 110760 ']' 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.162 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 [2024-12-13 12:11:02.687847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
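The memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses that file into the heap/mempool/memzone summary. The same two steps in isolation (paths exactly as this test uses them):

  # Ask the running spdk_tgt to dump its DPDK memory state
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump, then show only heap 0 in detail
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0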
00:05:35.162 [2024-12-13 12:11:02.687893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110760 ] 00:05:35.162 [2024-12-13 12:11:02.761035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.162 [2024-12-13 12:11:02.783654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.422 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.422 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:35.422 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.422 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.422 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.422 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.422 { 00:05:35.422 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.422 } 00:05:35.422 12:11:02 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.422 12:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.422 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:35.422 1 heaps totaling size 818.000000 MiB 00:05:35.422 size: 818.000000 MiB heap id: 0 00:05:35.422 end heaps---------- 00:05:35.422 9 mempools totaling size 603.782043 MiB 00:05:35.422 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.422 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.422 size: 100.555481 MiB name: bdev_io_110760 00:05:35.422 size: 50.003479 MiB name: msgpool_110760 00:05:35.422 size: 36.509338 MiB name: fsdev_io_110760 00:05:35.422 size: 21.763794 MiB name: PDU_Pool 00:05:35.422 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.422 size: 4.133484 MiB name: evtpool_110760 00:05:35.422 size: 0.026123 MiB name: Session_Pool 00:05:35.422 end mempools------- 00:05:35.422 6 memzones totaling size 4.142822 MiB 00:05:35.422 size: 1.000366 MiB name: RG_ring_0_110760 00:05:35.422 size: 1.000366 MiB name: RG_ring_1_110760 00:05:35.422 size: 1.000366 MiB name: RG_ring_4_110760 00:05:35.422 size: 1.000366 MiB name: RG_ring_5_110760 00:05:35.422 size: 0.125366 MiB name: RG_ring_2_110760 00:05:35.422 size: 0.015991 MiB name: RG_ring_3_110760 00:05:35.422 end memzones------- 00:05:35.422 12:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.422 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:35.422 list of free elements. 
size: 10.852478 MiB 00:05:35.422 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:35.422 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:35.422 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:35.422 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:35.422 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:35.422 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:35.422 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:35.422 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:35.422 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:35.422 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:35.422 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:35.422 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:35.422 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:35.422 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:35.422 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:35.422 list of standard malloc elements. size: 199.218628 MiB 00:05:35.422 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:35.422 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:35.422 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:35.422 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:35.422 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:35.422 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:35.422 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:35.422 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:35.422 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:35.422 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:35.422 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:35.422 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:35.423 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:35.423 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:35.423 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:35.423 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:35.423 list of memzone associated elements. size: 607.928894 MiB 00:05:35.423 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:35.423 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:35.423 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:35.423 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:35.423 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:35.423 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_110760_0 00:05:35.423 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:35.423 associated memzone info: size: 48.002930 MiB name: MP_msgpool_110760_0 00:05:35.423 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:35.423 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_110760_0 00:05:35.423 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:35.423 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:35.423 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:35.423 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:35.423 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:35.423 associated memzone info: size: 3.000122 MiB name: MP_evtpool_110760_0 00:05:35.423 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:35.423 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_110760 00:05:35.423 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:35.423 associated memzone info: size: 1.007996 MiB name: MP_evtpool_110760 00:05:35.423 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:35.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:35.423 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:35.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:35.423 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:35.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:35.423 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:35.423 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:35.423 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:35.423 associated memzone info: size: 1.000366 MiB name: RG_ring_0_110760 00:05:35.423 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:35.423 associated memzone info: size: 1.000366 MiB name: RG_ring_1_110760 00:05:35.423 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:35.423 associated memzone info: size: 1.000366 MiB name: RG_ring_4_110760 00:05:35.423 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:35.423 associated memzone info: size: 1.000366 MiB name: RG_ring_5_110760 00:05:35.423 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:35.423 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_110760 00:05:35.423 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:35.423 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_110760 00:05:35.423 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:35.423 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:35.423 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:35.423 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:35.423 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:35.423 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:35.423 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:35.423 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_110760 00:05:35.423 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:35.423 associated memzone info: size: 0.125366 MiB name: RG_ring_2_110760 00:05:35.423 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:35.423 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:35.423 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:35.423 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:35.423 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:35.423 associated memzone info: size: 0.015991 MiB name: RG_ring_3_110760 00:05:35.423 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:35.423 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:35.423 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:35.423 associated memzone info: size: 0.000183 MiB name: MP_msgpool_110760 00:05:35.423 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:35.423 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_110760 00:05:35.423 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:35.423 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_110760 00:05:35.423 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:35.423 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:35.423 12:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:35.423 12:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110760 00:05:35.423 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 110760 ']' 00:05:35.423 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 110760 00:05:35.423 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:35.423 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.423 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110760 00:05:35.684 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.684 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.684 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110760' 00:05:35.684 killing process with pid 110760 00:05:35.684 12:11:03 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 110760 00:05:35.684 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 110760 00:05:35.944 00:05:35.944 real 0m0.973s 00:05:35.944 user 0m0.887s 00:05:35.944 sys 0m0.413s 00:05:35.944 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.944 12:11:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.944 ************************************ 00:05:35.944 END TEST dpdk_mem_utility 00:05:35.944 ************************************ 00:05:35.944 12:11:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.944 12:11:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.944 12:11:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.944 12:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:35.944 ************************************ 00:05:35.944 START TEST event 00:05:35.944 ************************************ 00:05:35.944 12:11:03 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.944 * Looking for test storage... 00:05:35.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.944 12:11:03 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.944 12:11:03 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.944 12:11:03 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.204 12:11:03 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.204 12:11:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.205 12:11:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.205 12:11:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.205 12:11:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.205 12:11:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.205 12:11:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.205 12:11:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.205 12:11:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.205 12:11:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.205 12:11:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.205 12:11:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.205 12:11:03 event -- scripts/common.sh@344 -- # case "$op" in 00:05:36.205 12:11:03 event -- scripts/common.sh@345 -- # : 1 00:05:36.205 12:11:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.205 12:11:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.205 12:11:03 event -- scripts/common.sh@365 -- # decimal 1 00:05:36.205 12:11:03 event -- scripts/common.sh@353 -- # local d=1 00:05:36.205 12:11:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.205 12:11:03 event -- scripts/common.sh@355 -- # echo 1 00:05:36.205 12:11:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.205 12:11:03 event -- scripts/common.sh@366 -- # decimal 2 00:05:36.205 12:11:03 event -- scripts/common.sh@353 -- # local d=2 00:05:36.205 12:11:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.205 12:11:03 event -- scripts/common.sh@355 -- # echo 2 00:05:36.205 12:11:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.205 12:11:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.205 12:11:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.205 12:11:03 event -- scripts/common.sh@368 -- # return 0 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.205 --rc genhtml_branch_coverage=1 00:05:36.205 --rc genhtml_function_coverage=1 00:05:36.205 --rc genhtml_legend=1 00:05:36.205 --rc geninfo_all_blocks=1 00:05:36.205 --rc geninfo_unexecuted_blocks=1 00:05:36.205 00:05:36.205 ' 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.205 --rc genhtml_branch_coverage=1 00:05:36.205 --rc genhtml_function_coverage=1 00:05:36.205 --rc genhtml_legend=1 00:05:36.205 --rc geninfo_all_blocks=1 00:05:36.205 --rc geninfo_unexecuted_blocks=1 00:05:36.205 00:05:36.205 ' 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.205 --rc genhtml_branch_coverage=1 00:05:36.205 --rc genhtml_function_coverage=1 00:05:36.205 --rc genhtml_legend=1 00:05:36.205 --rc geninfo_all_blocks=1 00:05:36.205 --rc geninfo_unexecuted_blocks=1 00:05:36.205 00:05:36.205 ' 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.205 --rc genhtml_branch_coverage=1 00:05:36.205 --rc genhtml_function_coverage=1 00:05:36.205 --rc genhtml_legend=1 00:05:36.205 --rc geninfo_all_blocks=1 00:05:36.205 --rc geninfo_unexecuted_blocks=1 00:05:36.205 00:05:36.205 ' 00:05:36.205 12:11:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:36.205 12:11:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.205 12:11:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:36.205 12:11:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.205 12:11:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.205 ************************************ 00:05:36.205 START TEST event_perf 00:05:36.205 ************************************ 00:05:36.205 12:11:03 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:36.205 Running I/O for 1 seconds...[2024-12-13 12:11:03.732078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:36.205 [2024-12-13 12:11:03.732171] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111044 ] 00:05:36.205 [2024-12-13 12:11:03.806594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.205 [2024-12-13 12:11:03.832490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.205 [2024-12-13 12:11:03.832594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.205 [2024-12-13 12:11:03.832702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.205 Running I/O for 1 seconds...[2024-12-13 12:11:03.832703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.585 00:05:37.585 lcore 0: 202036 00:05:37.585 lcore 1: 202037 00:05:37.585 lcore 2: 202039 00:05:37.585 lcore 3: 202037 00:05:37.585 done. 00:05:37.585 00:05:37.585 real 0m1.156s 00:05:37.585 user 0m4.074s 00:05:37.585 sys 0m0.079s 00:05:37.585 12:11:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.585 12:11:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.585 ************************************ 00:05:37.585 END TEST event_perf 00:05:37.585 ************************************ 00:05:37.585 12:11:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.585 12:11:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.585 12:11:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.585 12:11:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.585 ************************************ 00:05:37.585 START TEST event_reactor 00:05:37.585 ************************************ 00:05:37.585 12:11:04 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.585 [2024-12-13 12:11:04.957931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
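event_perf above was started with -m 0xF, a core mask with bits 0 through 3 set, which is why the EAL reports four available cores, four reactors come up, and four per-lcore event counts are printed. A small illustrative helper (not part of the test suite) for decoding such a mask:

  # Print the lcores selected by an SPDK/DPDK core mask, e.g. 0xF -> lcore 0..3
  mask=0xF
  for ((core = 0; (mask >> core) > 0; core++)); do
    (( (mask >> core) & 1 )) && echo "lcore $core"
  done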
00:05:37.586 [2024-12-13 12:11:04.957999] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111296 ] 00:05:37.586 [2024-12-13 12:11:05.037090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.586 [2024-12-13 12:11:05.060838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.525 test_start 00:05:38.525 oneshot 00:05:38.525 tick 100 00:05:38.525 tick 100 00:05:38.525 tick 250 00:05:38.525 tick 100 00:05:38.525 tick 100 00:05:38.525 tick 100 00:05:38.525 tick 250 00:05:38.525 tick 500 00:05:38.525 tick 100 00:05:38.525 tick 100 00:05:38.525 tick 250 00:05:38.525 tick 100 00:05:38.525 tick 100 00:05:38.525 test_end 00:05:38.525 00:05:38.525 real 0m1.155s 00:05:38.525 user 0m1.071s 00:05:38.525 sys 0m0.079s 00:05:38.525 12:11:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.525 12:11:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 ************************************ 00:05:38.525 END TEST event_reactor 00:05:38.525 ************************************ 00:05:38.526 12:11:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.526 12:11:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:38.526 12:11:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.526 12:11:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.526 ************************************ 00:05:38.526 START TEST event_reactor_perf 00:05:38.526 ************************************ 00:05:38.526 12:11:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.526 [2024-12-13 12:11:06.183617] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:38.526 [2024-12-13 12:11:06.183685] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111542 ] 00:05:38.785 [2024-12-13 12:11:06.261873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.785 [2024-12-13 12:11:06.283237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.725 test_start 00:05:39.725 test_end 00:05:39.725 Performance: 516552 events per second 00:05:39.725 00:05:39.725 real 0m1.150s 00:05:39.725 user 0m1.072s 00:05:39.725 sys 0m0.074s 00:05:39.725 12:11:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.725 12:11:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 ************************************ 00:05:39.725 END TEST event_reactor_perf 00:05:39.725 ************************************ 00:05:39.725 12:11:07 event -- event/event.sh@49 -- # uname -s 00:05:39.725 12:11:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.725 12:11:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.725 12:11:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.725 12:11:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.725 12:11:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.725 ************************************ 00:05:39.725 START TEST event_scheduler 00:05:39.725 ************************************ 00:05:39.725 12:11:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.985 * Looking for test storage... 
00:05:39.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:39.985 12:11:07 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.985 12:11:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.985 12:11:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.985 12:11:07 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.985 12:11:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.986 12:11:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.986 12:11:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.986 --rc genhtml_branch_coverage=1 00:05:39.986 --rc genhtml_function_coverage=1 00:05:39.986 --rc genhtml_legend=1 00:05:39.986 --rc geninfo_all_blocks=1 00:05:39.986 --rc geninfo_unexecuted_blocks=1 00:05:39.986 00:05:39.986 ' 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.986 --rc genhtml_branch_coverage=1 00:05:39.986 --rc genhtml_function_coverage=1 00:05:39.986 --rc genhtml_legend=1 00:05:39.986 --rc geninfo_all_blocks=1 00:05:39.986 --rc geninfo_unexecuted_blocks=1 00:05:39.986 00:05:39.986 ' 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.986 --rc genhtml_branch_coverage=1 00:05:39.986 --rc genhtml_function_coverage=1 00:05:39.986 --rc genhtml_legend=1 00:05:39.986 --rc geninfo_all_blocks=1 00:05:39.986 --rc geninfo_unexecuted_blocks=1 00:05:39.986 00:05:39.986 ' 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.986 --rc genhtml_branch_coverage=1 00:05:39.986 --rc genhtml_function_coverage=1 00:05:39.986 --rc genhtml_legend=1 00:05:39.986 --rc geninfo_all_blocks=1 00:05:39.986 --rc geninfo_unexecuted_blocks=1 00:05:39.986 00:05:39.986 ' 00:05:39.986 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.986 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=111820 00:05:39.986 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.986 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.986 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 111820 
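The cmp_versions trace that reappears before each test binary is scripts/common.sh checking whether the installed lcov is older than 2, so that the matching LCOV_OPTS can be exported. A condensed sketch of that dotted-version comparison (an illustrative rewrite, not the actual helper):

  # Succeed if dotted version $1 is strictly older than $2, e.g. version_lt 1.15 2
  version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
  }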
00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 111820 ']' 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.986 12:11:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.986 [2024-12-13 12:11:07.608938] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:39.986 [2024-12-13 12:11:07.608982] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111820 ] 00:05:39.986 [2024-12-13 12:11:07.680689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.247 [2024-12-13 12:11:07.706026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.247 [2024-12-13 12:11:07.706217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.247 [2024-12-13 12:11:07.706134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.247 [2024-12-13 12:11:07.706217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:40.247 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 [2024-12-13 12:11:07.770847] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:40.247 [2024-12-13 12:11:07.770865] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:40.247 [2024-12-13 12:11:07.770874] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:40.247 [2024-12-13 12:11:07.770880] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:40.247 [2024-12-13 12:11:07.770885] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 [2024-12-13 12:11:07.840943] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
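The scheduler app is launched with --wait-for-rpc, switched to the dynamic scheduler, and only then allowed to finish init; the dpdk_governor *ERROR* is tolerated here, since the trace shows the dynamic scheduler proceeding without a governor and setting its load/core/busy limits. The same sequence can be issued by hand against any app started with --wait-for-rpc (default socket assumed):

  # Pick the scheduler before subsystem init, then complete startup and confirm
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_scheduler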
00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 ************************************ 00:05:40.247 START TEST scheduler_create_thread 00:05:40.247 ************************************ 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 2 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 3 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 4 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 5 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 6 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 7 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 8 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.247 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.507 9 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.507 10 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.507 12:11:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.077 12:11:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.077 12:11:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.077 12:11:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.077 12:11:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.460 12:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.460 12:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.460 12:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.460 12:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.460 12:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.399 12:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.399 00:05:43.399 real 0m3.102s 00:05:43.399 user 0m0.022s 00:05:43.399 sys 0m0.007s 00:05:43.399 12:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.399 12:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.399 ************************************ 00:05:43.399 END TEST scheduler_create_thread 00:05:43.399 ************************************ 00:05:43.399 12:11:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.399 12:11:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 111820 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 111820 ']' 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 111820 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111820 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111820' 00:05:43.399 killing process with pid 111820 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 111820 00:05:43.399 12:11:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 111820 00:05:43.659 [2024-12-13 12:11:11.356171] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
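The scheduler_create_thread subtest above drives the app through its test plugin RPCs: scheduler_thread_create spawns pinned threads with a given active percentage, scheduler_thread_set_active retunes one, and scheduler_thread_delete removes one. Condensed, the calls look like this (ids 11 and 12 are the values the create calls returned in this run, and the scheduler_plugin module must be importable, as it is when run from test/event/scheduler):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Thread pinned to core 0, reporting 100% busy (prints the new thread id)
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Drop thread 11 to 50% active, then delete thread 12 (ids from the create calls)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12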
00:05:43.919 00:05:43.919 real 0m4.149s 00:05:43.919 user 0m6.665s 00:05:43.919 sys 0m0.391s 00:05:43.919 12:11:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.919 12:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.919 ************************************ 00:05:43.919 END TEST event_scheduler 00:05:43.919 ************************************ 00:05:43.919 12:11:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.919 12:11:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.919 12:11:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.919 12:11:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.919 12:11:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.180 ************************************ 00:05:44.180 START TEST app_repeat 00:05:44.180 ************************************ 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112540 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112540' 00:05:44.180 Process app_repeat pid: 112540 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.180 spdk_app_start Round 0 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112540 /var/tmp/spdk-nbd.sock 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112540 ']' 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.180 [2024-12-13 12:11:11.654342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
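app_repeat, which begins here, repeats a start/stop cycle against an app listening on /var/tmp/spdk-nbd.sock: as the trace below shows, each round creates two 64 MB malloc bdevs with 4096-byte blocks and exports them as /dev/nbd0 and /dev/nbd1 before tearing everything down again. The setup half in isolation (socket path and sizes exactly as the test uses them):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 64 MB malloc bdev with a 4096-byte block size -> named Malloc0
  $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  # Export the bdev as a kernel NBD device
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0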
00:05:44.180 [2024-12-13 12:11:11.654390] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112540 ] 00:05:44.180 [2024-12-13 12:11:11.727520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.180 [2024-12-13 12:11:11.752458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.180 [2024-12-13 12:11:11.752460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.180 12:11:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.180 12:11:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.440 Malloc0 00:05:44.440 12:11:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.700 Malloc1 00:05:44.700 12:11:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.700 12:11:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.960 /dev/nbd0 00:05:44.960 12:11:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.960 12:11:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.960 1+0 records in 00:05:44.960 1+0 records out 00:05:44.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193961 s, 21.1 MB/s 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.960 12:11:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.960 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.960 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.960 12:11:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.220 /dev/nbd1 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.220 1+0 records in 00:05:45.220 1+0 records out 00:05:45.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190298 s, 21.5 MB/s 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.220 12:11:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.220 
12:11:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.220 12:11:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.480 { 00:05:45.480 "nbd_device": "/dev/nbd0", 00:05:45.480 "bdev_name": "Malloc0" 00:05:45.480 }, 00:05:45.480 { 00:05:45.480 "nbd_device": "/dev/nbd1", 00:05:45.480 "bdev_name": "Malloc1" 00:05:45.480 } 00:05:45.480 ]' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.480 { 00:05:45.480 "nbd_device": "/dev/nbd0", 00:05:45.480 "bdev_name": "Malloc0" 00:05:45.480 }, 00:05:45.480 { 00:05:45.480 "nbd_device": "/dev/nbd1", 00:05:45.480 "bdev_name": "Malloc1" 00:05:45.480 } 00:05:45.480 ]' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.480 /dev/nbd1' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.480 /dev/nbd1' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.480 256+0 records in 00:05:45.480 256+0 records out 00:05:45.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106764 s, 98.2 MB/s 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.480 256+0 records in 00:05:45.480 256+0 records out 00:05:45.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132979 s, 78.9 MB/s 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.480 256+0 records in 00:05:45.480 256+0 records out 00:05:45.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144911 s, 72.4 MB/s 00:05:45.480 12:11:13 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.480 12:11:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.740 12:11:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.000 12:11:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.259 12:11:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.259 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.259 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.259 12:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.260 12:11:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.260 12:11:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.520 12:11:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.520 [2024-12-13 12:11:14.212725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.780 [2024-12-13 12:11:14.233347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.780 [2024-12-13 12:11:14.233348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.780 [2024-12-13 12:11:14.273534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.780 [2024-12-13 12:11:14.273570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:50.075 spdk_app_start Round 1 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112540 /var/tmp/spdk-nbd.sock 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112540 ']' 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
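Each waitfornbd call traced above first polls /proc/partitions for the new device node, then proves the device actually serves I/O with a single 4 KiB direct read. A minimal sketch of that shape, assuming the helper name from common/autotest_common.sh (the /tmp scratch path and the retry sleep are assumptions; the harness uses a file under the spdk test tree):

# Hedged sketch of waitfornbd, not the verbatim helper.
waitfornbd() {
    local nbd_name=$1 i

    # Wait (up to 20 tries) for the kernel to publish the nbd device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Prove it is readable: one 4 KiB direct-I/O read, then check the size.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}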
00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.075 12:11:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.075 Malloc0 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.075 Malloc1 00:05:50.075 12:11:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.075 12:11:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.335 /dev/nbd0 00:05:50.335 12:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.335 12:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:50.335 1+0 records in 00:05:50.335 1+0 records out 00:05:50.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223075 s, 18.4 MB/s 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.335 12:11:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.335 12:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.335 12:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.335 12:11:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.594 /dev/nbd1 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.594 1+0 records in 00:05:50.594 1+0 records out 00:05:50.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194526 s, 21.1 MB/s 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.594 12:11:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.594 12:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.852 { 00:05:50.852 "nbd_device": "/dev/nbd0", 00:05:50.852 "bdev_name": "Malloc0" 00:05:50.852 }, 00:05:50.852 { 00:05:50.852 "nbd_device": "/dev/nbd1", 00:05:50.852 "bdev_name": "Malloc1" 00:05:50.852 } 00:05:50.852 ]' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.852 { 00:05:50.852 "nbd_device": "/dev/nbd0", 00:05:50.852 "bdev_name": "Malloc0" 00:05:50.852 }, 00:05:50.852 { 00:05:50.852 "nbd_device": "/dev/nbd1", 00:05:50.852 "bdev_name": "Malloc1" 00:05:50.852 } 00:05:50.852 ]' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.852 /dev/nbd1' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.852 /dev/nbd1' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.852 256+0 records in 00:05:50.852 256+0 records out 00:05:50.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010673 s, 98.2 MB/s 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.852 256+0 records in 00:05:50.852 256+0 records out 00:05:50.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136149 s, 77.0 MB/s 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.852 256+0 records in 00:05:50.852 256+0 records out 00:05:50.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148343 s, 70.7 MB/s 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.852 12:11:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.853 12:11:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.111 12:11:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.371 12:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.631 12:11:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.631 12:11:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.891 12:11:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.150 [2024-12-13 12:11:19.593526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.150 [2024-12-13 12:11:19.614070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.150 [2024-12-13 12:11:19.614071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.150 [2024-12-13 12:11:19.655174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.150 [2024-12-13 12:11:19.655214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.440 12:11:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.440 12:11:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:55.440 spdk_app_start Round 2 00:05:55.440 12:11:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112540 /var/tmp/spdk-nbd.sock 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112540 ']' 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
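The write and verify passes above come from nbd_dd_data_verify in bdev/nbd_common.sh: seed 1 MiB of urandom into a temp file, dd it through every NBD device with oflag=direct, then cmp the first 1M back from each device before removing the file. A hedged sketch of that flow (the /tmp temp-file path is an assumption):

# Hedged sketch of nbd_dd_data_verify; mirrors the dd/cmp sequence traced above.
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2     # e.g. '/dev/nbd0 /dev/nbd1' write
    local tmp_file=/tmp/nbdrandtest

    if [ "$operation" = write ]; then
        # Seed 1 MiB (256 x 4 KiB) of random data, push it through each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Read back through the block layer and compare byte-for-byte.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}

As in the trace, the caller invokes it twice: once with write to populate the devices, then with verify to confirm the Malloc bdevs returned identical data.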
00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.440 12:11:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.440 12:11:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.440 Malloc0 00:05:55.440 12:11:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.440 Malloc1 00:05:55.440 12:11:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.440 12:11:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.699 /dev/nbd0 00:05:55.699 12:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.699 12:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.699 12:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.700 1+0 records in 00:05:55.700 1+0 records out 00:05:55.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195833 s, 20.9 MB/s 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.700 12:11:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.700 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.700 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.700 12:11:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.959 /dev/nbd1 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.959 1+0 records in 00:05:55.959 1+0 records out 00:05:55.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244976 s, 16.7 MB/s 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.959 12:11:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.959 12:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:56.219 { 00:05:56.219 "nbd_device": "/dev/nbd0", 00:05:56.219 "bdev_name": "Malloc0" 00:05:56.219 }, 00:05:56.219 { 00:05:56.219 "nbd_device": "/dev/nbd1", 00:05:56.219 "bdev_name": "Malloc1" 00:05:56.219 } 00:05:56.219 ]' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.219 { 00:05:56.219 "nbd_device": "/dev/nbd0", 00:05:56.219 "bdev_name": "Malloc0" 00:05:56.219 }, 00:05:56.219 { 00:05:56.219 "nbd_device": "/dev/nbd1", 00:05:56.219 "bdev_name": "Malloc1" 00:05:56.219 } 00:05:56.219 ]' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.219 /dev/nbd1' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.219 /dev/nbd1' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.219 256+0 records in 00:05:56.219 256+0 records out 00:05:56.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101154 s, 104 MB/s 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.219 256+0 records in 00:05:56.219 256+0 records out 00:05:56.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014442 s, 72.6 MB/s 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.219 256+0 records in 00:05:56.219 256+0 records out 00:05:56.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014692 s, 71.4 MB/s 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.219 12:11:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.478 12:11:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.737 12:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.996 12:11:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.996 12:11:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.255 12:11:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.255 [2024-12-13 12:11:24.909578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.255 [2024-12-13 12:11:24.929639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.255 [2024-12-13 12:11:24.929639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.519 [2024-12-13 12:11:24.970211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.519 [2024-12-13 12:11:24.970247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.809 12:11:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112540 /var/tmp/spdk-nbd.sock 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112540 ']' 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
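nbd_get_count, run before setup and after every teardown above, reduces the nbd_get_disks RPC output to a device count with jq and grep -c; the 'true' in the trace keeps an empty disk list from failing the pipeline. A sketch under the assumption that $rootdir points at the SPDK tree:

# Hedged sketch of nbd_get_count from bdev/nbd_common.sh.
nbd_get_count() {
    local rpc_server=$1                  # e.g. /var/tmp/spdk-nbd.sock

    # Ask the target which NBD devices it is currently exporting.
    local nbd_disks_json
    nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)

    # Keep only the /dev/nbdX paths, then count them. grep -c exits non-zero
    # on zero matches, so '|| true' lets an empty list yield count=0.
    local nbd_disks_name count
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}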
00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.809 12:11:27 event.app_repeat -- event/event.sh@39 -- # killprocess 112540 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112540 ']' 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112540 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.809 12:11:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112540 00:06:00.809 12:11:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.809 12:11:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.809 12:11:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112540' 00:06:00.809 killing process with pid 112540 00:06:00.810 12:11:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112540 00:06:00.810 12:11:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112540 00:06:00.810 spdk_app_start is called in Round 0. 00:06:00.810 Shutdown signal received, stop current app iteration 00:06:00.810 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:00.810 spdk_app_start is called in Round 1. 00:06:00.810 Shutdown signal received, stop current app iteration 00:06:00.810 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:00.810 spdk_app_start is called in Round 2. 00:06:00.810 Shutdown signal received, stop current app iteration 00:06:00.810 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:00.810 spdk_app_start is called in Round 3. 
00:06:00.810 Shutdown signal received, stop current app iteration 00:06:00.810 12:11:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.810 12:11:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.810 00:06:00.810 real 0m16.535s 00:06:00.810 user 0m36.483s 00:06:00.810 sys 0m2.620s 00:06:00.810 12:11:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.810 12:11:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.810 ************************************ 00:06:00.810 END TEST app_repeat 00:06:00.810 ************************************ 00:06:00.810 12:11:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.810 12:11:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.810 12:11:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.810 12:11:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.810 12:11:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.810 ************************************ 00:06:00.810 START TEST cpu_locks 00:06:00.810 ************************************ 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.810 * Looking for test storage... 00:06:00.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.810 12:11:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.810 --rc genhtml_branch_coverage=1 00:06:00.810 --rc genhtml_function_coverage=1 00:06:00.810 --rc genhtml_legend=1 00:06:00.810 --rc geninfo_all_blocks=1 00:06:00.810 --rc geninfo_unexecuted_blocks=1 00:06:00.810 00:06:00.810 ' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.810 --rc genhtml_branch_coverage=1 00:06:00.810 --rc genhtml_function_coverage=1 00:06:00.810 --rc genhtml_legend=1 00:06:00.810 --rc geninfo_all_blocks=1 00:06:00.810 --rc geninfo_unexecuted_blocks=1 00:06:00.810 00:06:00.810 ' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.810 --rc genhtml_branch_coverage=1 00:06:00.810 --rc genhtml_function_coverage=1 00:06:00.810 --rc genhtml_legend=1 00:06:00.810 --rc geninfo_all_blocks=1 00:06:00.810 --rc geninfo_unexecuted_blocks=1 00:06:00.810 00:06:00.810 ' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.810 --rc genhtml_branch_coverage=1 00:06:00.810 --rc genhtml_function_coverage=1 00:06:00.810 --rc genhtml_legend=1 00:06:00.810 --rc geninfo_all_blocks=1 00:06:00.810 --rc geninfo_unexecuted_blocks=1 00:06:00.810 00:06:00.810 ' 00:06:00.810 12:11:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.810 12:11:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.810 12:11:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.810 12:11:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.810 12:11:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.810 ************************************ 
00:06:00.810 START TEST default_locks 00:06:00.810 ************************************ 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=115467 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 115467 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115467 ']' 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.810 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.810 [2024-12-13 12:11:28.480235] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:00.810 [2024-12-13 12:11:28.480273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115467 ] 00:06:01.069 [2024-12-13 12:11:28.554332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.069 [2024-12-13 12:11:28.576321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.328 lslocks: write error 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 115467 ']' 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115467' 
00:06:01.328 killing process with pid 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 115467 00:06:01.328 12:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 115467 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 115467 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 115467 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 115467 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115467 ']' 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (115467) - No such process 00:06:01.588 ERROR: process (pid: 115467) is no longer running 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.588 00:06:01.588 real 0m0.848s 00:06:01.588 user 0m0.787s 00:06:01.588 sys 0m0.417s 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.588 12:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.588 ************************************ 00:06:01.588 END TEST default_locks 00:06:01.588 ************************************ 00:06:01.847 12:11:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.847 12:11:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.847 12:11:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.847 12:11:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.847 ************************************ 00:06:01.847 START TEST default_locks_via_rpc 00:06:01.847 ************************************ 00:06:01.847 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=115716 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 115716 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 115716 ']' 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
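The default_locks pass that just closed boils down to a short shell pattern: start a target on core mask 0x1, confirm it holds a spdk_cpu_lock file, kill it, and expect any further wait on that pid to fail. A minimal sketch of that flow, assuming it is run from an SPDK checkout with the target built at build/bin/spdk_tgt (the paths in the traces above) and using a fixed sleep in place of the harness's waitforlisten helper:

  ./build/bin/spdk_tgt -m 0x1 &            # claims the core-0 lock file on startup
  tgt=$!
  sleep 2                                  # crude stand-in for waitforlisten
  lslocks -p "$tgt" | grep spdk_cpu_lock   # lock is held while the target runs
  kill -9 "$tgt"
  wait "$tgt" 2>/dev/null                  # reap; the lock dies with the process
  kill -0 "$tgt" 2>/dev/null || echo "target gone, core lock released"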
00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.848 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.848 [2024-12-13 12:11:29.402318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:01.848 [2024-12-13 12:11:29.402359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115716 ] 00:06:01.848 [2024-12-13 12:11:29.474714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.848 [2024-12-13 12:11:29.497391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 115716 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 115716 00:06:02.107 12:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 115716 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 115716 ']' 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 115716 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115716 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.676 12:11:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115716' 00:06:02.676 killing process with pid 115716 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 115716 00:06:02.676 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 115716 00:06:02.935 00:06:02.935 real 0m1.101s 00:06:02.935 user 0m1.056s 00:06:02.935 sys 0m0.510s 00:06:02.935 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.935 12:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.935 ************************************ 00:06:02.935 END TEST default_locks_via_rpc 00:06:02.935 ************************************ 00:06:02.935 12:11:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.935 12:11:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.935 12:11:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.935 12:11:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.935 ************************************ 00:06:02.935 START TEST non_locking_app_on_locked_coremask 00:06:02.935 ************************************ 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=115964 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 115964 /var/tmp/spdk.sock 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 115964 ']' 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.935 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.935 [2024-12-13 12:11:30.572027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
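default_locks_via_rpc, finished just above, covers the same lock files but flips them at runtime: framework_disable_cpumask_locks drops the per-core locks on a live target and framework_enable_cpumask_locks re-claims them, the two RPCs visible verbatim in the trace. A rough equivalent outside the harness, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket:

  ./build/bin/spdk_tgt -m 0x1 &
  tgt=$!
  sleep 2
  ./scripts/rpc.py framework_disable_cpumask_locks   # release the core-0 lock file
  ./scripts/rpc.py framework_enable_cpumask_locks    # take it back
  lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "locks held again"
  kill -9 "$tgt"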
00:06:02.935 [2024-12-13 12:11:30.572065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115964 ] 00:06:03.194 [2024-12-13 12:11:30.644517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.194 [2024-12-13 12:11:30.666294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=115974 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 115974 /var/tmp/spdk2.sock 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 115974 ']' 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.194 12:11:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.453 [2024-12-13 12:11:30.924109] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:03.453 [2024-12-13 12:11:30.924154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115974 ] 00:06:03.453 [2024-12-13 12:11:31.007556] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.453 [2024-12-13 12:11:31.007576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.453 [2024-12-13 12:11:31.053805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.390 12:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.390 12:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.390 12:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 115964 00:06:04.390 12:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.390 12:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115964 00:06:04.959 lslocks: write error 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 115964 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 115964 ']' 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 115964 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115964 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115964' 00:06:04.959 killing process with pid 115964 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 115964 00:06:04.959 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 115964 00:06:05.527 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 115974 00:06:05.527 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 115974 ']' 00:06:05.527 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 115974 00:06:05.527 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.527 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.528 12:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115974 00:06:05.528 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.528 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.528 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115974' 00:06:05.528 killing 
process with pid 115974 00:06:05.528 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 115974 00:06:05.528 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 115974 00:06:05.787 00:06:05.787 real 0m2.802s 00:06:05.787 user 0m2.921s 00:06:05.787 sys 0m0.965s 00:06:05.787 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.787 12:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.787 ************************************ 00:06:05.787 END TEST non_locking_app_on_locked_coremask 00:06:05.787 ************************************ 00:06:05.787 12:11:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.787 12:11:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.787 12:11:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.787 12:11:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.787 ************************************ 00:06:05.787 START TEST locking_app_on_unlocked_coremask 00:06:05.787 ************************************ 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116453 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 116453 /var/tmp/spdk.sock 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116453 ']' 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.787 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.787 [2024-12-13 12:11:33.445030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:05.787 [2024-12-13 12:11:33.445071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116453 ] 00:06:06.046 [2024-12-13 12:11:33.515918] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
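non_locking_app_on_locked_coremask, which ended above, shows that --disable-cpumask-locks lets a second target share an already-claimed mask: pid 115964 held the core-0 lock, and pid 115974 still came up on the same mask after printing 'CPU core locks deactivated.' The shape of that pairing, under the same path assumptions as the earlier sketches:

  ./build/bin/spdk_tgt -m 0x1 &                       # holds /var/tmp/spdk_cpu_lock_000
  holder=$!
  sleep 2
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # coexists, takes no locks
  second=$!
  sleep 2
  lslocks -p "$holder" | grep -q spdk_cpu_lock && echo "first instance still owns the lock"
  kill -9 "$holder" "$second"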
00:06:06.046 [2024-12-13 12:11:33.515944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.046 [2024-12-13 12:11:33.538868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116462 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 116462 /var/tmp/spdk2.sock 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116462 ']' 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.046 12:11:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 [2024-12-13 12:11:33.791774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:06.304 [2024-12-13 12:11:33.791820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116462 ] 00:06:06.305 [2024-12-13 12:11:33.877738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.305 [2024-12-13 12:11:33.925747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.239 12:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.239 12:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.239 12:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 116462 00:06:07.239 12:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116462 00:06:07.239 12:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.805 lslocks: write error 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 116453 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116453 ']' 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116453 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116453 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116453' 00:06:07.805 killing process with pid 116453 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116453 00:06:07.805 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116453 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 116462 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116462 ']' 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116462 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116462 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.372 12:11:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116462' 00:06:08.372 killing process with pid 116462 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116462 00:06:08.372 12:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116462 00:06:08.630 00:06:08.630 real 0m2.876s 00:06:08.630 user 0m3.028s 00:06:08.630 sys 0m0.979s 00:06:08.630 12:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.630 12:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.630 ************************************ 00:06:08.630 END TEST locking_app_on_unlocked_coremask 00:06:08.630 ************************************ 00:06:08.630 12:11:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.630 12:11:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.630 12:11:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.630 12:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.889 ************************************ 00:06:08.890 START TEST locking_app_on_locked_coremask 00:06:08.890 ************************************ 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=116942 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 116942 /var/tmp/spdk.sock 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116942 ']' 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.890 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.890 [2024-12-13 12:11:36.391031] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
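locking_app_on_unlocked_coremask, closed out above, is the mirror image: the first target opts out with --disable-cpumask-locks, so the second, started normally on the same mask, is the one that claims the lock files, and the lslocks check accordingly runs against the second pid (116462 in the trace). Only the flag placement changes relative to the previous sketch:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # first: no locks taken
  sleep 2
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second: claims the core-0 lock
  claimer=$!
  sleep 2
  lslocks -p "$claimer" | grep -q spdk_cpu_lock && echo "lock belongs to the second instance"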
00:06:08.890 [2024-12-13 12:11:36.391072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116942 ] 00:06:08.890 [2024-12-13 12:11:36.463310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.890 [2024-12-13 12:11:36.486062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=116956 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 116956 /var/tmp/spdk2.sock 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 116956 /var/tmp/spdk2.sock 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 116956 /var/tmp/spdk2.sock 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116956 ']' 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.149 12:11:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.149 [2024-12-13 12:11:36.733675] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:09.149 [2024-12-13 12:11:36.733721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116956 ] 00:06:09.149 [2024-12-13 12:11:36.821126] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 116942 has claimed it. 00:06:09.149 [2024-12-13 12:11:36.821162] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (116956) - No such process 00:06:09.717 ERROR: process (pid: 116956) is no longer running 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 116942 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116942 00:06:09.717 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.976 lslocks: write error 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 116942 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116942 ']' 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116942 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116942 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116942' 00:06:09.976 killing process with pid 116942 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116942 00:06:09.976 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116942 00:06:10.544 00:06:10.544 real 0m1.623s 00:06:10.544 user 0m1.746s 00:06:10.544 sys 0m0.537s 00:06:10.544 12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.544 
12:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.544 ************************************ 00:06:10.544 END TEST locking_app_on_locked_coremask 00:06:10.544 ************************************ 00:06:10.544 12:11:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.544 12:11:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.544 12:11:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.544 12:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.544 ************************************ 00:06:10.544 START TEST locking_overlapped_coremask 00:06:10.544 ************************************ 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117211 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117211 /var/tmp/spdk.sock 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117211 ']' 00:06:10.544 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.545 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.545 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.545 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.545 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.545 [2024-12-13 12:11:38.079040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
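locking_app_on_locked_coremask, whose END banner appears above, is the negative case: with pid 116942 holding core 0, a second plain start on the same mask must abort with 'Cannot create lock on core 0, probably process 116942 has claimed it' followed by 'Unable to acquire lock on assigned core mask - exiting.', which the harness asserts through the NOT waitforlisten wrapper (hence the expected 'No such process' kill error in the trace). Assuming the aborted claim surfaces as a non-zero exit status, which is how the wrapper treats it, the assertion reduces to roughly:

  ./build/bin/spdk_tgt -m 0x1 &
  holder=$!
  sleep 2
  if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "unexpected: second instance acquired a claimed core" >&2
  else
      echo "second instance bailed out on the claimed core, as the test expects"
  fi
  kill -9 "$holder"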
00:06:10.545 [2024-12-13 12:11:38.079081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117211 ] 00:06:10.545 [2024-12-13 12:11:38.151380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.545 [2024-12-13 12:11:38.176878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.545 [2024-12-13 12:11:38.176990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.545 [2024-12-13 12:11:38.176991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117417 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117417 /var/tmp/spdk2.sock 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117417 /var/tmp/spdk2.sock 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117417 /var/tmp/spdk2.sock 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117417 ']' 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.804 12:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.804 [2024-12-13 12:11:38.422497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:10.804 [2024-12-13 12:11:38.422544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117417 ] 00:06:11.062 [2024-12-13 12:11:38.514059] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117211 has claimed it. 00:06:11.063 [2024-12-13 12:11:38.514094] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117417) - No such process 00:06:11.630 ERROR: process (pid: 117417) is no longer running 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117211 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117211 ']' 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117211 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117211 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117211' 00:06:11.630 killing process with pid 117211 00:06:11.630 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117211 00:06:11.630 12:11:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117211 00:06:11.889 00:06:11.889 real 0m1.372s 00:06:11.889 user 0m3.816s 00:06:11.889 sys 0m0.374s 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.889 ************************************ 00:06:11.889 END TEST locking_overlapped_coremask 00:06:11.889 ************************************ 00:06:11.889 12:11:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.889 12:11:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.889 12:11:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.889 12:11:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.889 ************************************ 00:06:11.889 START TEST locking_overlapped_coremask_via_rpc 00:06:11.889 ************************************ 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117509 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 117509 /var/tmp/spdk.sock 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117509 ']' 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.889 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.890 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.890 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.890 [2024-12-13 12:11:39.525917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:11.890 [2024-12-13 12:11:39.525962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117509 ] 00:06:12.149 [2024-12-13 12:11:39.601331] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
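In locking_overlapped_coremask, just finished, the two masks intersect on a single core: 0x7 (cores 0-2) is locked first, 0x1c (cores 2-4) fails its claim on core 2 alone, and check_remaining_locks then asserts that exactly the holder's three lock files survive. The final comparison in the trace is a plain glob-against-brace-expansion check, equivalent to:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})    # one file per core in mask 0x7
  [[ ${locks[*]} == "${expected[*]}" ]] && echo "only the holder's core locks remain"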
00:06:12.149 [2024-12-13 12:11:39.601356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.149 [2024-12-13 12:11:39.627106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.149 [2024-12-13 12:11:39.627212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.149 [2024-12-13 12:11:39.627212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117698 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 117698 /var/tmp/spdk2.sock 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117698 ']' 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.149 12:11:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.408 [2024-12-13 12:11:39.879432] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:12.408 [2024-12-13 12:11:39.879485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117698 ] 00:06:12.408 [2024-12-13 12:11:39.970698] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
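Both targets in this test are launched with --disable-cpumask-locks, which is why each prints the "CPU core locks deactivated" notice: neither takes any /var/tmp/spdk_cpu_lock_* file at startup even though their coremasks overlap. The lock files only appear once framework_enable_cpumask_locks is invoked, which is the step the test exercises next. Reduced to the three commands involved (a sketch with repo-relative paths, not a literal excerpt of the script):

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no lock files yet
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, also unlocked
  scripts/rpc.py framework_enable_cpumask_locks                                # first target now claims spdk_cpu_lock_000..002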
00:06:12.408 [2024-12-13 12:11:39.970727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.408 [2024-12-13 12:11:40.020087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.408 [2024-12-13 12:11:40.020199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.408 [2024-12-13 12:11:40.020201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.345 [2024-12-13 12:11:40.727850] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117509 has claimed it. 
00:06:13.345 request: 00:06:13.345 { 00:06:13.345 "method": "framework_enable_cpumask_locks", 00:06:13.345 "req_id": 1 00:06:13.345 } 00:06:13.345 Got JSON-RPC error response 00:06:13.345 response: 00:06:13.345 { 00:06:13.345 "code": -32603, 00:06:13.345 "message": "Failed to claim CPU core: 2" 00:06:13.345 } 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 117509 /var/tmp/spdk.sock 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117509 ']' 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 117698 /var/tmp/spdk2.sock 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117698 ']' 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
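The -32603 "Failed to claim CPU core: 2" above is exactly the collision the masks predict: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the only core both targets schedule a reactor on, and the first target already holds its lock file there. The overlap can be checked with shell arithmetic (illustrative variable names):

  first=0x7; second=0x1c
  printf 'overlap: 0x%x\n' $(( first & second ))   # prints 0x4, i.e. the bit for core 2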
00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.345 12:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.604 00:06:13.604 real 0m1.697s 00:06:13.604 user 0m0.848s 00:06:13.604 sys 0m0.141s 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.604 12:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.604 ************************************ 00:06:13.604 END TEST locking_overlapped_coremask_via_rpc 00:06:13.604 ************************************ 00:06:13.604 12:11:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.604 12:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117509 ]] 00:06:13.604 12:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 117509 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117509 ']' 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117509 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117509 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117509' 00:06:13.604 killing process with pid 117509 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117509 00:06:13.604 12:11:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117509 00:06:13.863 12:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117698 ]] 00:06:13.863 12:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117698 00:06:13.863 12:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117698 ']' 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117698 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
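The check_remaining_locks helper traced above is the real assertion of the test: after the first target claims its mask, exactly lock files 000-002 must exist. Pulled out of the harness, the check is just a glob-versus-brace-expansion comparison (same statements as in the trace, runnable standalone):

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match" || echo "unexpected: ${locks[*]}"

The comparison relies on glob results sorting lexicographically, which the zero-padded names guarantee.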
00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117698 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117698' 00:06:14.122 killing process with pid 117698 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117698 00:06:14.122 12:11:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117698 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117509 ]] 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 117509 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117509 ']' 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117509 00:06:14.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117509) - No such process 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117509 is not found' 00:06:14.381 Process with pid 117509 is not found 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117698 ]] 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117698 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117698 ']' 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117698 00:06:14.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117698) - No such process 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117698 is not found' 00:06:14.381 Process with pid 117698 is not found 00:06:14.381 12:11:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.381 00:06:14.381 real 0m13.693s 00:06:14.381 user 0m24.015s 00:06:14.381 sys 0m4.881s 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.381 12:11:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 ************************************ 00:06:14.381 END TEST cpu_locks 00:06:14.381 ************************************ 00:06:14.381 00:06:14.381 real 0m38.456s 00:06:14.381 user 1m13.630s 00:06:14.381 sys 0m8.533s 00:06:14.381 12:11:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.381 12:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 ************************************ 00:06:14.381 END TEST event 00:06:14.381 ************************************ 00:06:14.381 12:11:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.381 12:11:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.382 12:11:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.382 12:11:41 -- common/autotest_common.sh@10 -- # set +x 00:06:14.382 ************************************ 00:06:14.382 START TEST thread 00:06:14.382 ************************************ 00:06:14.382 12:11:42 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.641 * Looking for test storage... 00:06:14.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.641 12:11:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.641 12:11:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.641 12:11:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.641 12:11:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.641 12:11:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.641 12:11:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.641 12:11:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.641 12:11:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.641 12:11:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.641 12:11:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.641 12:11:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.641 12:11:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.641 12:11:42 thread -- scripts/common.sh@345 -- # : 1 00:06:14.641 12:11:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.641 12:11:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.641 12:11:42 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.641 12:11:42 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.641 12:11:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.641 12:11:42 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.641 12:11:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.641 12:11:42 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.641 12:11:42 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.641 12:11:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.641 12:11:42 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.641 12:11:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.641 12:11:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.641 12:11:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.641 12:11:42 thread -- scripts/common.sh@368 -- # return 0 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.641 --rc genhtml_branch_coverage=1 00:06:14.641 --rc genhtml_function_coverage=1 00:06:14.641 --rc genhtml_legend=1 00:06:14.641 --rc geninfo_all_blocks=1 00:06:14.641 --rc geninfo_unexecuted_blocks=1 00:06:14.641 00:06:14.641 ' 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.641 --rc genhtml_branch_coverage=1 00:06:14.641 --rc genhtml_function_coverage=1 00:06:14.641 --rc genhtml_legend=1 00:06:14.641 --rc geninfo_all_blocks=1 00:06:14.641 --rc geninfo_unexecuted_blocks=1 00:06:14.641 00:06:14.641 ' 00:06:14.641 12:11:42 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.641 --rc genhtml_branch_coverage=1 00:06:14.641 --rc genhtml_function_coverage=1 00:06:14.641 --rc genhtml_legend=1 00:06:14.641 --rc geninfo_all_blocks=1 00:06:14.641 --rc geninfo_unexecuted_blocks=1 00:06:14.641 00:06:14.641 ' 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.641 --rc genhtml_branch_coverage=1 00:06:14.641 --rc genhtml_function_coverage=1 00:06:14.641 --rc genhtml_legend=1 00:06:14.641 --rc geninfo_all_blocks=1 00:06:14.641 --rc geninfo_unexecuted_blocks=1 00:06:14.641 00:06:14.641 ' 00:06:14.641 12:11:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.641 12:11:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.641 ************************************ 00:06:14.641 START TEST thread_poller_perf 00:06:14.641 ************************************ 00:06:14.641 12:11:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.641 [2024-12-13 12:11:42.249146] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:14.641 [2024-12-13 12:11:42.249216] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118117 ] 00:06:14.641 [2024-12-13 12:11:42.328918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.900 [2024-12-13 12:11:42.351830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.900 Running 1000 pollers for 1 seconds with 1 microseconds period. 
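The banner printed by poller_perf maps one-to-one onto its flags: -b is the number of pollers to register, -l the poller period in microseconds, and -t the run time in seconds (the mapping is read off the two invocations in this log rather than from the tool's help text):

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s -> the run above
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 0 us period -> the timer-less run further down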
00:06:15.837 [2024-12-13T11:11:43.537Z] ====================================== 00:06:15.837 [2024-12-13T11:11:43.537Z] busy:2107294446 (cyc) 00:06:15.837 [2024-12-13T11:11:43.537Z] total_run_count: 419000 00:06:15.837 [2024-12-13T11:11:43.537Z] tsc_hz: 2100000000 (cyc) 00:06:15.837 [2024-12-13T11:11:43.537Z] ====================================== 00:06:15.837 [2024-12-13T11:11:43.537Z] poller_cost: 5029 (cyc), 2394 (nsec) 00:06:15.837 00:06:15.837 real 0m1.166s 00:06:15.837 user 0m1.078s 00:06:15.837 sys 0m0.085s 00:06:15.837 12:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.837 12:11:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.837 ************************************ 00:06:15.837 END TEST thread_poller_perf 00:06:15.837 ************************************ 00:06:15.837 12:11:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.837 12:11:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:15.837 12:11:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.837 12:11:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.837 ************************************ 00:06:15.837 START TEST thread_poller_perf 00:06:15.837 ************************************ 00:06:15.837 12:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.837 [2024-12-13 12:11:43.477692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:15.837 [2024-12-13 12:11:43.477763] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118288 ] 00:06:16.096 [2024-12-13 12:11:43.554437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.096 [2024-12-13 12:11:43.576385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.096 Running 1000 pollers for 1 seconds with 0 microseconds period. 
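The summary block is easy to sanity-check by hand: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure is that quotient scaled by tsc_hz. For the 1 us-period run above:

  busy=2107294446; runs=419000; tsc_hz=2100000000
  echo $(( busy / runs ))                        # 5029 cycles per poller invocation
  echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2394 nsec at 2.1 GHz

The same arithmetic on the 0 us run that follows (2101602922 cycles over 5046000 runs) gives 416 cycles, about 198 ns: the cost of an always-ready poller with no timer bookkeeping.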
00:06:17.033 [2024-12-13T11:11:44.733Z] ====================================== 00:06:17.033 [2024-12-13T11:11:44.733Z] busy:2101602922 (cyc) 00:06:17.033 [2024-12-13T11:11:44.733Z] total_run_count: 5046000 00:06:17.033 [2024-12-13T11:11:44.733Z] tsc_hz: 2100000000 (cyc) 00:06:17.033 [2024-12-13T11:11:44.733Z] ====================================== 00:06:17.033 [2024-12-13T11:11:44.733Z] poller_cost: 416 (cyc), 198 (nsec) 00:06:17.033 00:06:17.033 real 0m1.154s 00:06:17.033 user 0m1.081s 00:06:17.033 sys 0m0.069s 00:06:17.033 12:11:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.033 12:11:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.033 ************************************ 00:06:17.033 END TEST thread_poller_perf 00:06:17.033 ************************************ 00:06:17.033 12:11:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.033 00:06:17.033 real 0m2.628s 00:06:17.033 user 0m2.319s 00:06:17.033 sys 0m0.321s 00:06:17.033 12:11:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.033 12:11:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.033 ************************************ 00:06:17.033 END TEST thread 00:06:17.033 ************************************ 00:06:17.033 12:11:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:17.033 12:11:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.033 12:11:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.033 12:11:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.033 12:11:44 -- common/autotest_common.sh@10 -- # set +x 00:06:17.033 ************************************ 00:06:17.033 START TEST app_cmdline 00:06:17.033 ************************************ 00:06:17.033 12:11:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.292 * Looking for test storage... 
00:06:17.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.292 12:11:44 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.292 12:11:44 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.292 12:11:44 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.292 12:11:44 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.292 12:11:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.293 12:11:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.293 --rc genhtml_branch_coverage=1 00:06:17.293 --rc genhtml_function_coverage=1 00:06:17.293 --rc genhtml_legend=1 00:06:17.293 --rc geninfo_all_blocks=1 00:06:17.293 --rc geninfo_unexecuted_blocks=1 00:06:17.293 00:06:17.293 ' 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.293 --rc genhtml_branch_coverage=1 00:06:17.293 --rc genhtml_function_coverage=1 00:06:17.293 --rc genhtml_legend=1 00:06:17.293 --rc geninfo_all_blocks=1 00:06:17.293 --rc geninfo_unexecuted_blocks=1 
00:06:17.293 00:06:17.293 ' 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.293 --rc genhtml_branch_coverage=1 00:06:17.293 --rc genhtml_function_coverage=1 00:06:17.293 --rc genhtml_legend=1 00:06:17.293 --rc geninfo_all_blocks=1 00:06:17.293 --rc geninfo_unexecuted_blocks=1 00:06:17.293 00:06:17.293 ' 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.293 --rc genhtml_branch_coverage=1 00:06:17.293 --rc genhtml_function_coverage=1 00:06:17.293 --rc genhtml_legend=1 00:06:17.293 --rc geninfo_all_blocks=1 00:06:17.293 --rc geninfo_unexecuted_blocks=1 00:06:17.293 00:06:17.293 ' 00:06:17.293 12:11:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:17.293 12:11:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=118601 00:06:17.293 12:11:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 118601 00:06:17.293 12:11:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 118601 ']' 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.293 12:11:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.293 [2024-12-13 12:11:44.947703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:17.293 [2024-12-13 12:11:44.947748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118601 ] 00:06:17.573 [2024-12-13 12:11:45.021015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.573 [2024-12-13 12:11:45.044077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.573 12:11:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.573 12:11:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:17.573 12:11:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:17.831 { 00:06:17.831 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:17.831 "fields": { 00:06:17.831 "major": 25, 00:06:17.831 "minor": 1, 00:06:17.831 "patch": 0, 00:06:17.831 "suffix": "-pre", 00:06:17.831 "commit": "e01cb43b8" 00:06:17.831 } 00:06:17.831 } 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.831 12:11:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:17.831 12:11:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.090 request: 00:06:18.090 { 00:06:18.090 "method": "env_dpdk_get_mem_stats", 00:06:18.090 "req_id": 1 00:06:18.090 } 00:06:18.090 Got JSON-RPC error response 00:06:18.090 response: 00:06:18.090 { 00:06:18.090 "code": -32601, 00:06:18.090 "message": "Method not found" 00:06:18.090 } 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.090 12:11:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 118601 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 118601 ']' 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 118601 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118601 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118601' 00:06:18.090 killing process with pid 118601 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 118601 00:06:18.090 12:11:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 118601 00:06:18.349 00:06:18.349 real 0m1.293s 00:06:18.349 user 0m1.508s 00:06:18.349 sys 0m0.437s 00:06:18.349 12:11:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.349 12:11:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.349 ************************************ 00:06:18.349 END TEST app_cmdline 00:06:18.349 ************************************ 00:06:18.349 12:11:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.349 12:11:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.349 12:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.349 12:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:18.609 ************************************ 00:06:18.609 START TEST version 00:06:18.609 ************************************ 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.609 * Looking for test storage... 
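The -32601 "Method not found" above is the expected outcome, not a failure: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so anything outside that allow-list is rejected before dispatch. Reproduced by hand against the same socket (sketch):

  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown earlier
  scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list: JSON-RPC error -32601, "Method not found"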
00:06:18.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.609 12:11:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.609 12:11:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.609 12:11:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.609 12:11:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.609 12:11:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.609 12:11:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.609 12:11:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.609 12:11:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.609 12:11:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.609 12:11:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.609 12:11:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.609 12:11:46 version -- scripts/common.sh@344 -- # case "$op" in 00:06:18.609 12:11:46 version -- scripts/common.sh@345 -- # : 1 00:06:18.609 12:11:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.609 12:11:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.609 12:11:46 version -- scripts/common.sh@365 -- # decimal 1 00:06:18.609 12:11:46 version -- scripts/common.sh@353 -- # local d=1 00:06:18.609 12:11:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.609 12:11:46 version -- scripts/common.sh@355 -- # echo 1 00:06:18.609 12:11:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.609 12:11:46 version -- scripts/common.sh@366 -- # decimal 2 00:06:18.609 12:11:46 version -- scripts/common.sh@353 -- # local d=2 00:06:18.609 12:11:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.609 12:11:46 version -- scripts/common.sh@355 -- # echo 2 00:06:18.609 12:11:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.609 12:11:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.609 12:11:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.609 12:11:46 version -- scripts/common.sh@368 -- # return 0 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.609 --rc genhtml_branch_coverage=1 00:06:18.609 --rc genhtml_function_coverage=1 00:06:18.609 --rc genhtml_legend=1 00:06:18.609 --rc geninfo_all_blocks=1 00:06:18.609 --rc geninfo_unexecuted_blocks=1 00:06:18.609 00:06:18.609 ' 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.609 --rc genhtml_branch_coverage=1 00:06:18.609 --rc genhtml_function_coverage=1 00:06:18.609 --rc genhtml_legend=1 00:06:18.609 --rc geninfo_all_blocks=1 00:06:18.609 --rc geninfo_unexecuted_blocks=1 00:06:18.609 00:06:18.609 ' 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.609 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.609 --rc genhtml_branch_coverage=1 00:06:18.609 --rc genhtml_function_coverage=1 00:06:18.609 --rc genhtml_legend=1 00:06:18.609 --rc geninfo_all_blocks=1 00:06:18.609 --rc geninfo_unexecuted_blocks=1 00:06:18.609 00:06:18.609 ' 00:06:18.609 12:11:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.609 --rc genhtml_branch_coverage=1 00:06:18.609 --rc genhtml_function_coverage=1 00:06:18.609 --rc genhtml_legend=1 00:06:18.609 --rc geninfo_all_blocks=1 00:06:18.609 --rc geninfo_unexecuted_blocks=1 00:06:18.609 00:06:18.609 ' 00:06:18.609 12:11:46 version -- app/version.sh@17 -- # get_header_version major 00:06:18.609 12:11:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # cut -f2 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.609 12:11:46 version -- app/version.sh@17 -- # major=25 00:06:18.609 12:11:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:18.609 12:11:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # cut -f2 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.609 12:11:46 version -- app/version.sh@18 -- # minor=1 00:06:18.609 12:11:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:18.609 12:11:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # cut -f2 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.609 12:11:46 version -- app/version.sh@19 -- # patch=0 00:06:18.609 12:11:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:18.609 12:11:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # cut -f2 00:06:18.609 12:11:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.609 12:11:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:18.609 12:11:46 version -- app/version.sh@22 -- # version=25.1 00:06:18.609 12:11:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.609 12:11:46 version -- app/version.sh@28 -- # version=25.1rc0 00:06:18.609 12:11:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.609 12:11:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.869 12:11:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:18.869 12:11:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:18.869 00:06:18.869 real 0m0.245s 00:06:18.869 user 0m0.149s 00:06:18.869 sys 0m0.140s 00:06:18.869 12:11:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.869 
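get_header_version pulls each number straight out of include/spdk/version.h with the grep/cut/tr pipeline traced above; cut's default tab delimiter isolates the value, and tr -d '"' strips the quotes around the suffix. Standalone (same pipeline as the trace, run from the repo root):

  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "$major.$minor$suffix"   # 25.1-pre here; the script renders the -pre suffix as the rc0 in 25.1rc0

The final assertion just checks that this shell-derived string and python3 -c 'import spdk; print(spdk.__version__)' agree on 25.1rc0.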
12:11:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:18.869 ************************************ 00:06:18.869 END TEST version 00:06:18.869 ************************************ 00:06:18.869 12:11:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:18.869 12:11:46 -- spdk/autotest.sh@194 -- # uname -s 00:06:18.869 12:11:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:18.869 12:11:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:18.869 12:11:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:18.869 12:11:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:18.869 12:11:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.869 12:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:18.869 12:11:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:18.869 12:11:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:18.869 12:11:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.869 12:11:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.869 12:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.869 12:11:46 -- common/autotest_common.sh@10 -- # set +x 00:06:18.869 ************************************ 00:06:18.869 START TEST nvmf_tcp 00:06:18.869 ************************************ 00:06:18.869 12:11:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.869 * Looking for test storage... 
00:06:18.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:18.869 12:11:46 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.869 12:11:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.869 12:11:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.129 12:11:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 12:11:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.129 12:11:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:19.129 12:11:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.129 12:11:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.130 12:11:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.130 12:11:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.130 ************************************ 00:06:19.130 START TEST nvmf_target_core 00:06:19.130 ************************************ 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.130 * Looking for test storage... 00:06:19.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.130 --rc genhtml_branch_coverage=1 00:06:19.130 --rc genhtml_function_coverage=1 00:06:19.130 --rc genhtml_legend=1 00:06:19.130 --rc geninfo_all_blocks=1 00:06:19.130 --rc geninfo_unexecuted_blocks=1 00:06:19.130 00:06:19.130 ' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.130 --rc genhtml_branch_coverage=1 00:06:19.130 --rc genhtml_function_coverage=1 00:06:19.130 --rc genhtml_legend=1 00:06:19.130 --rc geninfo_all_blocks=1 00:06:19.130 --rc geninfo_unexecuted_blocks=1 00:06:19.130 00:06:19.130 ' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.130 --rc genhtml_branch_coverage=1 00:06:19.130 --rc genhtml_function_coverage=1 00:06:19.130 --rc genhtml_legend=1 00:06:19.130 --rc geninfo_all_blocks=1 00:06:19.130 --rc geninfo_unexecuted_blocks=1 00:06:19.130 00:06:19.130 ' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.130 --rc genhtml_branch_coverage=1 00:06:19.130 --rc genhtml_function_coverage=1 00:06:19.130 --rc genhtml_legend=1 00:06:19.130 --rc geninfo_all_blocks=1 00:06:19.130 --rc geninfo_unexecuted_blocks=1 00:06:19.130 00:06:19.130 ' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.130 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:19.390 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.391 
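The "[: : integer expression expected" message captured above (and repeated each time a sub-test re-sources common.sh) is a genuine bug at test/nvmf/common.sh line 33: the test expands to '[' '' -eq 1 ']', and -eq requires integer operands, so an empty variable makes the [ builtin itself fail with status 2 instead of evaluating cleanly to false. A minimal reproduction and a defensive rewrite; the variable name here is hypothetical, since the trace only shows the already-expanded empty string:

  flag=""                                  # whatever flag line 33 tests, empty in this run
  [ "$flag" -eq 1 ] && echo enabled        # stderr: [: : integer expression expected (status 2)
  [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps it a clean integer compare

The run survives only because status 2 is falsy, so the conditional takes the same branch it would on a proper 0; the stderr noise recurs on every source of common.sh.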
************************************ 00:06:19.391 START TEST nvmf_abort 00:06:19.391 ************************************ 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:19.391 * Looking for test storage... 00:06:19.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.391 12:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.391 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
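nvmftestinit, traced from this point on, first scans for supported NICs (the two E810 ports found below) and then builds the two-node TCP topology the target tests assume: the target-side port is moved into a private network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic traverses the NIC rather than the kernel loopback. A condensed sketch of the plumbing the trace walks through below, using the cvl_0_0/cvl_0_1 interface names from this run:

  ip netns add cvl_0_0_ns_spdk                     # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # verify both directions before starting the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping statistics below confirm the link is wired up before nvmf_tgt is launched.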
00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.392 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.651 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.651 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.651 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.651 12:11:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.222 12:11:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:26.222 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:26.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.222 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.223 12:11:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:26.223 Found net devices under 0000:af:00.0: cvl_0_0 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:26.223 Found net devices under 0000:af:00.1: cvl_0_1 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.223 12:11:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.223 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:06:26.223 00:06:26.223 --- 10.0.0.2 ping statistics --- 00:06:26.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.223 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:26.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:06:26.223 00:06:26.223 --- 10.0.0.1 ping statistics --- 00:06:26.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.223 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122204 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122204 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122204 ']' 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 [2024-12-13 12:11:53.170668] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
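nvmfappstart -m 0xE launches nvmf_tgt inside the target namespace with core mask 0xE; 0xE is binary 1110, so the reactor threads are pinned to cores 1-3 with core 0 left free, which matches the "Total cores available: 3" and the three reactor start notices that follow. The launch, condensed from the trace:

  # -m 0xE = 0b1110 -> reactors on cores 1, 2 and 3; -e 0xFFFF enables all tracepoint groups
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!    # 122204 in this run; waitforlisten then polls until the RPC socket is up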
00:06:26.223 [2024-12-13 12:11:53.170709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.223 [2024-12-13 12:11:53.245897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.223 [2024-12-13 12:11:53.268806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.223 [2024-12-13 12:11:53.268844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.223 [2024-12-13 12:11:53.268851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.223 [2024-12-13 12:11:53.268857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.223 [2024-12-13 12:11:53.268863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.223 [2024-12-13 12:11:53.270092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.223 [2024-12-13 12:11:53.270198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.223 [2024-12-13 12:11:53.270199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 [2024-12-13 12:11:53.413285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 Malloc0 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.223 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.223 Delay0 
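With the target listening, abort.sh configures it over the RPC socket: a TCP transport, a 64 MiB malloc bdev with a 4096-byte block size, and a delay bdev wrapped around it so I/O lingers long enough for aborts to catch it in flight; the subsystem and listener follow next in the trace. The same sequence issued directly with scripts/rpc.py (rpc_cmd is a thin wrapper around it), assuming the default /var/tmp/spdk.sock socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256     # transport options exactly as the trace passes them
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM-backed bdev, 4096-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # ~1 s average/p99 read and write latency
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The abort example then connects to that listener at queue depth 128 for a one-second run; the tallies further down (38049 aborts submitted, 37992 successful, 0 failed commands) are what the test asserts on before tearing the subsystem back down.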
00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.224 [2024-12-13 12:11:53.487860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.224 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:26.224 [2024-12-13 12:11:53.621575] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:28.131 Initializing NVMe Controllers 00:06:28.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:28.131 controller IO queue size 128 less than required 00:06:28.131 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:28.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:28.131 Initialization complete. Launching workers. 
00:06:28.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37988 00:06:28.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38049, failed to submit 62 00:06:28.131 success 37992, unsuccessful 57, failed 0 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.131 rmmod nvme_tcp 00:06:28.131 rmmod nvme_fabrics 00:06:28.131 rmmod nvme_keyring 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122204 ']' 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122204 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122204 ']' 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122204 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122204 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122204' 00:06:28.131 killing process with pid 122204 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122204 00:06:28.131 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122204 00:06:28.390 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.390 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.390 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.391 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:28.391 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.391 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:28.391 12:11:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.391 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.391 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.391 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.391 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.391 12:11:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.928 00:06:30.928 real 0m11.196s 00:06:30.928 user 0m11.730s 00:06:30.928 sys 0m5.276s 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.928 ************************************ 00:06:30.928 END TEST nvmf_abort 00:06:30.928 ************************************ 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.928 ************************************ 00:06:30.928 START TEST nvmf_ns_hotplug_stress 00:06:30.928 ************************************ 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.928 * Looking for test storage... 
00:06:30.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.928 --rc genhtml_branch_coverage=1 00:06:30.928 --rc genhtml_function_coverage=1 00:06:30.928 --rc genhtml_legend=1 00:06:30.928 --rc geninfo_all_blocks=1 00:06:30.928 --rc geninfo_unexecuted_blocks=1 00:06:30.928 00:06:30.928 ' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.928 --rc genhtml_branch_coverage=1 00:06:30.928 --rc genhtml_function_coverage=1 00:06:30.928 --rc genhtml_legend=1 00:06:30.928 --rc geninfo_all_blocks=1 00:06:30.928 --rc geninfo_unexecuted_blocks=1 00:06:30.928 00:06:30.928 ' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.928 --rc genhtml_branch_coverage=1 00:06:30.928 --rc genhtml_function_coverage=1 00:06:30.928 --rc genhtml_legend=1 00:06:30.928 --rc geninfo_all_blocks=1 00:06:30.928 --rc geninfo_unexecuted_blocks=1 00:06:30.928 00:06:30.928 ' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.928 --rc genhtml_branch_coverage=1 00:06:30.928 --rc genhtml_function_coverage=1 00:06:30.928 --rc genhtml_legend=1 00:06:30.928 --rc geninfo_all_blocks=1 00:06:30.928 --rc geninfo_unexecuted_blocks=1 00:06:30.928 00:06:30.928 ' 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.928 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.929 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.500 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:37.500 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.500 
12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:37.500 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:37.500 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:37.501 Found net devices under 0000:af:00.0: cvl_0_0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:37.501 Found net devices under 0000:af:00.1: cvl_0_1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:37.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:06:37.501 00:06:37.501 --- 10.0.0.2 ping statistics --- 00:06:37.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.501 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:06:37.501 00:06:37.501 --- 10.0.0.1 ping statistics --- 00:06:37.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.501 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126381 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126381 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
126381 ']' 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.501 [2024-12-13 12:12:04.347614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:37.501 [2024-12-13 12:12:04.347663] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.501 [2024-12-13 12:12:04.420414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.501 [2024-12-13 12:12:04.443227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.501 [2024-12-13 12:12:04.443261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.501 [2024-12-13 12:12:04.443268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.501 [2024-12-13 12:12:04.443274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.501 [2024-12-13 12:12:04.443279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
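
For anyone replaying this setup by hand, the nvmftestinit phase traced above reduces to the shell sequence below, pieced together from the commands visible in the trace. The interface names (cvl_0_0, cvl_0_1), the namespace name cvl_0_0_ns_spdk, and the 10.0.0.0/24 addressing are specific to this test rig; treat this as a sketch of the topology the harness builds, not the authoritative common.sh logic.

    # Flush any stale addresses, then move one port of the NIC pair into a private namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: initiator stays in the host namespace, target lives inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace: core mask 0xE (cores 1-3), full trace mask
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The reactor notices that follow (cores 1, 2, and 3) are consistent with the 0xE core mask passed above.
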
00:06:37.501 [2024-12-13 12:12:04.444558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.501 [2024-12-13 12:12:04.444666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.501 [2024-12-13 12:12:04.444667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:37.501 [2024-12-13 12:12:04.740897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:37.501 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.501 [2024-12-13 12:12:05.130276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.501 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.760 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:38.018 Malloc0 00:06:38.018 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:38.277 Delay0 00:06:38.277 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.277 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:38.535 NULL1 00:06:38.535 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:38.794 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:38.794 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=126740 00:06:38.794 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:38.794 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.052 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.311 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:39.311 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:39.311 true 00:06:39.311 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:39.311 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.570 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.829 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:39.829 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:40.087 true 00:06:40.087 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:40.087 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.346 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.346 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:40.346 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:40.604 true 00:06:40.604 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:40.604 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.862 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.121 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:41.121 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:41.379 true 00:06:41.379 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:41.379 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.637 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.637 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:41.637 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:41.895 true 00:06:41.895 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:41.895 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.154 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.412 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:42.412 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:42.670 true 00:06:42.670 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:42.670 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.929 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.187 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:43.187 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:43.187 true 00:06:43.187 12:12:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:43.187 12:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.445 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.704 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:43.704 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:43.962 true 00:06:43.962 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:43.962 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.221 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.479 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:44.479 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:44.479 true 00:06:44.479 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:44.480 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.738 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.996 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:44.996 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:45.255 true 00:06:45.255 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:45.255 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.514 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.772 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:45.772 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:45.772 true 00:06:45.772 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:45.772 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.031 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.289 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:46.289 12:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.548 true 00:06:46.548 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:46.548 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.807 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.065 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:47.065 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:47.065 true 00:06:47.065 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:47.065 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.324 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.582 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:47.582 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:47.841 true 00:06:47.841 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:47.841 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.099 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.358 12:12:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:48.358 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:48.358 true 00:06:48.358 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:48.358 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.618 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.876 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:48.876 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:49.135 true 00:06:49.135 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:49.135 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.393 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.652 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:49.652 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:49.652 true 00:06:49.652 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:49.652 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.910 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.169 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:50.169 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:50.428 true 00:06:50.428 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:50.428 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.687 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.945 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:50.945 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:50.945 true 00:06:50.945 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:50.945 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.204 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.462 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:51.462 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:51.721 true 00:06:51.721 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:51.721 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.980 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.238 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:52.238 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:52.238 true 00:06:52.238 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:52.238 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.497 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.756 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:52.756 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:53.014 true 00:06:53.014 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:53.014 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.273 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.532 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:53.532 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:53.532 true 00:06:53.532 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:53.532 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.791 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.049 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:54.049 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:54.308 true 00:06:54.308 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:54.308 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.567 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.825 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:54.825 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:54.825 true 00:06:54.825 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:54.825 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.084 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.343 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:55.343 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:55.601 true 00:06:55.601 12:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:55.601 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.860 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.119 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:56.119 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:56.119 true 00:06:56.378 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:56.378 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.378 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.636 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:56.636 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:56.895 true 00:06:56.895 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:56.895 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.153 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.412 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:57.412 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:57.412 true 00:06:57.669 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:57.669 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.669 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.927 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:57.927 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:58.185 true 00:06:58.185 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:58.185 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.444 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.703 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:58.703 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:58.703 true 00:06:58.961 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:58.961 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.961 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.220 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:59.220 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:59.478 true 00:06:59.478 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:06:59.478 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.737 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.995 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:59.995 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:00.252 true 00:07:00.252 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:00.252 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.252 12:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.529 12:12:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:00.530 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:00.788 true 00:07:00.788 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:00.788 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.047 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.305 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:01.305 12:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:01.305 true 00:07:01.564 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:01.564 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.564 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.822 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:01.822 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:02.081 true 00:07:02.081 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:02.081 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.340 12:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.599 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:02.599 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:02.858 true 00:07:02.858 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:02.858 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.858 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.118 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:03.118 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:03.377 true 00:07:03.377 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:03.377 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.646 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.905 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:03.905 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:03.905 true 00:07:04.164 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:04.164 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.164 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.423 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:04.423 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:04.682 true 00:07:04.682 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:04.682 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.941 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.200 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:05.200 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:05.200 true 00:07:05.459 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:05.459 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.459 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.718 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:05.718 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:05.977 true 00:07:05.977 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:05.977 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.237 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.496 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:06.496 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:06.755 true 00:07:06.755 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:06.755 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.755 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.014 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:07.014 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:07.273 true 00:07:07.273 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740 00:07:07.273 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.532 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.791 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:07.791 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:07.791 true 00:07:08.049 12:12:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740
00:07:08.049 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.050 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.309 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:07:08.309 12:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:07:08.568 true
00:07:08.568 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740
00:07:08.568 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.827 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.086 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:09.086 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:09.086 Initializing NVMe Controllers
00:07:09.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:09.086 Controller IO queue size 128, less than required.
00:07:09.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:09.086 Initialization complete. Launching workers.
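The records above are the tail of the namespace-resize phase: ns_hotplug_stress.sh keeps cycling remove_ns/add_ns against cnode1 and growing the NULL1 bdev (null_size 1027 up to 1047 in this excerpt) for as long as the perf process 126740 stays alive; the perf summary that follows below is printed as that process exits. Reconstructed from the sh@44 through sh@50 markers in the trace, the driving loop is approximately the sketch below (rpc, perf_pid, and the starting null_size are assumptions; only the RPC calls and script line numbers actually appear in the log):

    # Hypothetical reconstruction of ns_hotplug_stress.sh lines 44-50, inferred from the trace markers
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$perf_pid"; do                                     # sh@44: run until the perf process exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                  # sh@49: 1027 up to 1047 in this excerpt
        $rpc bdev_null_resize NULL1 $null_size                        # sh@50: resize NULL1 while I/O is in flight
    done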
00:07:09.086 ========================================================
00:07:09.086                                                                            Latency(us)
00:07:09.086 Device Information                                                         :       IOPS      MiB/s    Average        min        max
00:07:09.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27685.56      13.52    4623.27    2302.53    8578.78
00:07:09.086 ========================================================
00:07:09.086 Total                                                                      :   27685.56      13.52    4623.27    2302.53    8578.78
00:07:09.086
00:07:09.086 true
00:07:09.346 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126740
00:07:09.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (126740) - No such process
00:07:09.346 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 126740
00:07:09.346 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.346 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.606 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:09.606 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:09.606 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:09.606 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:09.606 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:09.865 null0
00:07:09.865 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:09.865 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:09.865 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:09.865 null1
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:10.124 null2
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:10.124 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:10.383 null3
00:07:10.383 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
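The summary above covers the I/O that ran during the resize loop: 27685.56 IOPS at 13.52 MiB/s against NSID 2, which works out to roughly 512-byte IOs (27685.56 x 512 B is about 13.52 MiB/s), with an average latency of 4623.27 us. Once the perf process is gone (the failed kill -0 at sh@44), the script waits for it, removes namespaces 1 and 2, and prepares the eight-thread phase: the sh@58 through sh@60 records above, together with the null4 through null7 creations that follow below, correspond to a setup loop along these lines (variable names are taken from the trace; the loop structure itself is inferred):

    # Hypothetical reconstruction of the setup at ns_hotplug_stress.sh lines 58-60
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                    # sh@58
    pids=()                                       # sh@58: filled in by the dispatch loop later
    for ((i = 0; i < nthreads; i++)); do          # sh@59
        $rpc bdev_null_create "null$i" 100 4096   # sh@60: one 100 MB null bdev (4096-byte blocks) per worker
    done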
00:07:10.383 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.383 12:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:10.643 null4 00:07:10.643 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.643 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.643 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:10.902 null5 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:10.902 null6 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.902 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:11.162 null7 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:11.162 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
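By this point all eight null bdevs exist, and the trace interleaves two things: the dispatch loop at sh@62 through sh@64 launching one add_remove worker per bdev in the background, and the workers' own sh@14 through sh@18 records as each begins its ten add/remove cycles; the wait on the eight collected PIDs (sh@66) appears just below. A sketch of both, inferred from the markers (the function body is a reconstruction, not a copy of the script):

    # Hypothetical reconstruction of add_remove (sh@14-sh@18) and its dispatch (sh@62-sh@66)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                         # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                                # sh@16: ten cycles per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17: attach namespace
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18: detach it again
        done
    }
    for ((i = 0; i < nthreads; i++)); do   # sh@62
        add_remove $((i + 1)) "null$i" &   # sh@63: worker i exercises NSID i+1
        pids+=($!)                         # sh@64: collect the worker PID
    done
    wait "${pids[@]}"                      # sh@66: matches the wait on eight PIDs in the trace below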
00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132648 132649 132651 132653 132655 132657 132658 132660 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.163 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.423 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.423 12:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.423 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.682 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.942 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.943 12:12:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.943 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.202 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.203 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.203 12:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.462 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.722 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.981 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.982 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.241 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.241 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.242 12:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.501 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.761 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.026 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.287 12:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.548 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.808 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.068 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.327 12:12:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.327 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.327 rmmod nvme_tcp 00:07:15.587 rmmod nvme_fabrics 00:07:15.587 rmmod nvme_keyring 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126381 ']' 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126381 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126381 ']' 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126381 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126381 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126381' 00:07:15.587 killing process with pid 126381 00:07:15.587 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126381 00:07:15.588 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126381 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.848 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:17.757 00:07:17.757 real 0m47.247s 00:07:17.757 user 3m22.503s 00:07:17.757 sys 0m17.155s 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.757 ************************************ 00:07:17.757 END TEST nvmf_ns_hotplug_stress 00:07:17.757 ************************************ 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.757 12:12:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.018 ************************************ 00:07:18.018 START TEST nvmf_delete_subsystem 00:07:18.018 ************************************ 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.018 * Looking for test storage... 
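[The records between the end of the stress loop and the timing summary above are nvmftestfini unwinding the target. Condensed from the trace, the teardown runs roughly as sketched below; the helper names are the ones visible in the log, while the exact function bodies and the $nvmfpid variable name are assumptions:

nvmftestfini() {
    nvmfcleanup                             # sync, then retry "modprobe -v -r nvme-tcp" / nvme-fabrics, up to 20 tries
    killprocess "$nvmfpid"                  # kill -0 liveness check, then kill the nvmf_tgt reactor (pid 126381 here) and wait
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK_NVMF-tagged firewall rules
    _remove_spdk_ns                         # tear down the cvl_0_0_ns_spdk network namespace
    ip -4 addr flush cvl_0_1                # clear the host-side test address
}

The rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring in the log are the visible side effect of the modprobe -r calls inside nvmfcleanup.]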
00:07:18.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.018 --rc genhtml_branch_coverage=1 00:07:18.018 --rc genhtml_function_coverage=1 00:07:18.018 --rc genhtml_legend=1 00:07:18.018 --rc geninfo_all_blocks=1 00:07:18.018 --rc geninfo_unexecuted_blocks=1 00:07:18.018 00:07:18.018 ' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.018 --rc genhtml_branch_coverage=1 00:07:18.018 --rc genhtml_function_coverage=1 00:07:18.018 --rc genhtml_legend=1 00:07:18.018 --rc geninfo_all_blocks=1 00:07:18.018 --rc geninfo_unexecuted_blocks=1 00:07:18.018 00:07:18.018 ' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.018 --rc genhtml_branch_coverage=1 00:07:18.018 --rc genhtml_function_coverage=1 00:07:18.018 --rc genhtml_legend=1 00:07:18.018 --rc geninfo_all_blocks=1 00:07:18.018 --rc geninfo_unexecuted_blocks=1 00:07:18.018 00:07:18.018 ' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.018 --rc genhtml_branch_coverage=1 00:07:18.018 --rc genhtml_function_coverage=1 00:07:18.018 --rc genhtml_legend=1 00:07:18.018 --rc geninfo_all_blocks=1 00:07:18.018 --rc geninfo_unexecuted_blocks=1 00:07:18.018 00:07:18.018 ' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.018 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.019 12:12:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:24.597 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.597 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.598 
12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:24.598 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:24.598 Found net devices under 0000:af:00.0: cvl_0_0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:24.598 Found net devices under 0000:af:00.1: cvl_0_1 
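[The discovery pass above matches each whitelisted Intel e810/x722 or Mellanox PCI function against the bus cache and resolves it to its kernel net device through sysfs; here both ports of the 0x8086:0x159b NIC resolve to cvl_0_0 and cvl_0_1. The per-device resolution step, lifted almost verbatim from the traced nvmf/common.sh logic (surrounding setup of pci_devs is elided):

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev(s) bound to this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keeping names like cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done]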
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:24.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:24.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms
00:07:24.598
00:07:24.598 --- 10.0.0.2 ping statistics ---
00:07:24.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:24.598 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:24.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:24.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:07:24.598
00:07:24.598 --- 10.0.0.1 ping statistics ---
00:07:24.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:24.598 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137080
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137080
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137080 ']'
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:24.598 12:12:51
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.598 [2024-12-13 12:12:51.784929] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:24.598 [2024-12-13 12:12:51.784969] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.598 [2024-12-13 12:12:51.860647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.598 [2024-12-13 12:12:51.881562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.598 [2024-12-13 12:12:51.881600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.598 [2024-12-13 12:12:51.881607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.598 [2024-12-13 12:12:51.881613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.598 [2024-12-13 12:12:51.881618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.598 [2024-12-13 12:12:51.882752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.598 [2024-12-13 12:12:51.882753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.598 12:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 [2024-12-13 12:12:52.021580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:24.599 12:12:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 [2024-12-13 12:12:52.045798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 NULL1 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 Delay0 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137198 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:24.599 12:12:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:24.599 [2024-12-13 12:12:52.152484] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
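Everything the harness just traced through rpc_cmd can be reproduced against a running nvmf_tgt with scripts/rpc.py (rpc_cmd is a thin wrapper over it, talking to /var/tmp/spdk.sock). A sketch of the same bring-up, with every RPC name and argument copied from the trace above — only the rpc() helper is an invention of this note:

    # Stand up the TCP transport, a subsystem capped at 10 namespaces, a listener,
    # and a delay bdev layered on a null bdev so I/O lingers in the queue.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512       # 1000 MiB backing bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The 1000000 values on Delay0 set average and p99 read/write latency in microseconds — roughly a second per I/O — which is what guarantees a full queue is still outstanding when the subsystem is deleted two seconds later.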
00:07:26.504 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:26.504 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:26.504 12:12:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.764 Read completed with error (sct=0, sc=8)
00:07:26.764 Write completed with error (sct=0, sc=8)
00:07:26.764 Read completed with error (sct=0, sc=8)
00:07:26.764 starting I/O failed: -6
[identical "Read/Write completed with error (sct=0, sc=8)" records, punctuated by "starting I/O failed: -6", repeat for the rest of the queued commands]
00:07:26.764 [2024-12-13 12:12:54.271213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314c60 is same with the state(6) to be set
00:07:26.764 Read completed with error (sct=0, sc=8)
00:07:26.764 Write completed with error (sct=0, sc=8)
[the completion-error storm continues in the same form]
00:07:26.765 starting I/O failed: -6
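These failures are the point of the test: nvmf_delete_subsystem was issued while spdk_nvme_perf still had 128 commands queued against Delay0, so every outstanding I/O completes with an error and each new submission fails with -6. Condensed from the delete_subsystem.sh line numbers visible in this trace (@26 through @38), the pattern is roughly the following sketch — rpc_cmd is the harness wrapper (scripts/rpc.py behaves the same), and the failure handling is simplified:

    # Drive queued I/O in the background, then delete the subsystem underneath it.
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                      # let the 128-deep queue fill up
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 $perf_pid 2> /dev/null; do     # poll until perf notices and exits
        (( delay++ > 30 )) && exit 1             # the real script fails the test here
        sleep 0.5
    done

The storm of completion records continues below until the last queued command has drained.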
00:07:26.765 [2024-12-13 12:12:54.272062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b8000d4d0 is same with the state(6) to be set
00:07:27.703 [2024-12-13 12:12:55.246656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312260 is same with the state(6) to be set
00:07:27.703 Read completed with error (sct=0, sc=8)
00:07:27.703 Write completed with error (sct=0, sc=8)
[the queued I/O keeps draining with the same completion-error records as each remaining qpair is torn down]
00:07:27.703 [2024-12-13 12:12:55.274187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b80000c80 is same with the state(6) to be set
00:07:27.703 [2024-12-13 12:12:55.274750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b8000d800 is same with the state(6) to be set
00:07:27.703 [2024-12-13 12:12:55.274913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b8000d060 is same with the state(6) to be set
00:07:27.703 [2024-12-13 12:12:55.275411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13695f0 is same with the state(6) to be set
00:07:27.703 Initializing NVMe Controllers
00:07:27.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:27.703 Controller IO queue size 128, less than required.
00:07:27.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:27.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:27.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:27.704 Initialization complete. Launching workers.
00:07:27.704 ========================================================
00:07:27.704                                                            Latency(us)
00:07:27.704 Device Information                                       :   IOPS   MiB/s    Average        min        max
00:07:27.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.59    0.07  887313.77     237.88 1009496.33
00:07:27.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.01    0.08 1090441.13     422.07 2003130.66
00:07:27.704 ========================================================
00:07:27.704 Total                                                    : 319.60    0.16  993458.08     237.88 2003130.66
00:07:27.704
00:07:27.704 [2024-12-13 12:12:55.276115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1312260 (9): Bad file descriptor
00:07:27.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:27.704 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.704 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:27.704 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137198
00:07:27.704 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137198
00:07:28.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137198) - No such process
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137198
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137198
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:28.272 12:12:55
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137198 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.272 [2024-12-13 12:12:55.804006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=137788 00:07:28.272 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:28.273 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:28.273 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788 00:07:28.273 12:12:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.273 [2024-12-13 12:12:55.885567] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, 
even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:28.841 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:28.841 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:28.841 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:29.410 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:29.410 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:29.410 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:29.669 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:29.669 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:29.669 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:30.238 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:30.238 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:30.238 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:30.817 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:30.817 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:30.817 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:31.386 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:31.386 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:31.386 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:31.386 Initializing NVMe Controllers
00:07:31.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:31.386 Controller IO queue size 128, less than required.
00:07:31.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:31.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:31.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:31.386 Initialization complete. Launching workers.
00:07:31.386 ========================================================
00:07:31.386                                                            Latency(us)
00:07:31.386 Device Information                                       :   IOPS   MiB/s    Average        min        max
00:07:31.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06 1002077.86 1000171.40 1006365.25
00:07:31.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06 1004103.01 1000149.01 1042199.63
00:07:31.386 ========================================================
00:07:31.386 Total                                                    : 256.00    0.12 1003090.44 1000149.01 1042199.63
00:07:31.386
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137788
00:07:31.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (137788) - No such process
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 137788
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:31.955 rmmod nvme_tcp
00:07:31.955 rmmod nvme_fabrics
00:07:31.955 rmmod nvme_keyring
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137080 ']'
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137080
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137080 ']'
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137080
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137080
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
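nvmftestfini now unwinds everything nvmftestinit and nvmf_tcp_init set up: the rmmod messages above are modprobe -v -r narrating the nvme-tcp unload, and the iptr/iptables records that follow strip only the SPDK-tagged firewall rule. The essence as a standalone sketch — the namespace and interface names are from this run, and ip netns delete stands in for the harness's _remove_spdk_ns wrapper:

    sync
    modprobe -v -r nvme-tcp                      # drags nvme_fabrics/nvme_keyring out too
    modprobe -v -r nvme-fabrics
    kill 137080 && wait 137080                   # the nvmf_tgt started earlier (nvmfpid)
    # Drop only rules carrying the SPDK_NVMF comment, keep the rest of the firewall.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk              # returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1

Tagging the ACCEPT rule with a comment at setup time is what makes this grep-based removal safe on a shared CI host.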
']' 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137080' 00:07:31.955 killing process with pid 137080 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137080 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137080 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:31.955 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.956 12:12:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.495 00:07:34.495 real 0m16.236s 00:07:34.495 user 0m29.298s 00:07:34.495 sys 0m5.405s 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.495 ************************************ 00:07:34.495 END TEST nvmf_delete_subsystem 00:07:34.495 ************************************ 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.495 ************************************ 00:07:34.495 START TEST nvmf_host_management 00:07:34.495 ************************************ 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:34.495 * Looking for test storage... 
00:07:34.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.495 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.496 --rc genhtml_branch_coverage=1 00:07:34.496 --rc genhtml_function_coverage=1 00:07:34.496 --rc genhtml_legend=1 00:07:34.496 --rc geninfo_all_blocks=1 00:07:34.496 --rc geninfo_unexecuted_blocks=1 00:07:34.496 00:07:34.496 ' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.496 --rc genhtml_branch_coverage=1 00:07:34.496 --rc genhtml_function_coverage=1 00:07:34.496 --rc genhtml_legend=1 00:07:34.496 --rc geninfo_all_blocks=1 00:07:34.496 --rc geninfo_unexecuted_blocks=1 00:07:34.496 00:07:34.496 ' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.496 --rc genhtml_branch_coverage=1 00:07:34.496 --rc genhtml_function_coverage=1 00:07:34.496 --rc genhtml_legend=1 00:07:34.496 --rc geninfo_all_blocks=1 00:07:34.496 --rc geninfo_unexecuted_blocks=1 00:07:34.496 00:07:34.496 ' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.496 --rc genhtml_branch_coverage=1 00:07:34.496 --rc genhtml_function_coverage=1 00:07:34.496 --rc genhtml_legend=1 00:07:34.496 --rc geninfo_all_blocks=1 00:07:34.496 --rc geninfo_unexecuted_blocks=1 00:07:34.496 00:07:34.496 ' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:34.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.496 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.497 12:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.074 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.074 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
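The "[: : integer expression expected" complaint above is bash reporting that common.sh line 33 evaluated '[' '' -eq 1 ']' with an empty value; -eq requires an integer on both sides. A minimal guarded form of such a test (SOME_FLAG is a hypothetical variable name, not necessarily the one common.sh tests):

  # Default the variable to 0 before the numeric comparison so an unset or
  # empty value can never reach '-eq'.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo 'flag enabled'
  fi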
-ga e810 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.075 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
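For orientation, the e810/x722/mlx arrays being filled above bucket PCI functions by vendor:device ID before the test picks its NICs. A minimal sketch of the same idea read straight from sysfs (assumes the standard /sys/bus/pci/devices layout; the ID list here is just the two e810 IDs visible in the trace, not the script's full table):

  declare -a e810=()
  for dev in /sys/bus/pci/devices/*; do
      [ -e "$dev/vendor" ] || continue
      ven=$(cat "$dev/vendor")     # e.g. 0x8086 (Intel)
      did=$(cat "$dev/device")     # e.g. 0x159b
      if [ "$ven" = "0x8086" ] && { [ "$did" = "0x1592" ] || [ "$did" = "0x159b" ]; }; then
          e810+=("${dev##*/}")     # keep the bare PCI address, e.g. 0000:af:00.0
      fi
  done
  printf 'e810 NIC: %s\n' "${e810[@]}"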
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.075 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.075 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.075 12:13:07 
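The "Found net devices under ..." lines come from globbing the net/ directory the kernel exposes beneath each PCI function. The same lookup as a standalone sketch (PCI address copied from the trace; path layout is the standard sysfs one):

  pci=0000:af:00.0
  for net in "/sys/bus/pci/devices/$pci/net/"*; do
      # The glob stays literal if the directory is empty, hence the -e guard.
      [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
  done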
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.075 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
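nvmf_tcp_init above splits the two ports of one NIC across network namespaces so a single host can act as both NVMe/TCP target and initiator. Condensed from the trace into a standalone sketch (interface, namespace, and address names exactly as logged; must run as root):

  ip -4 addr flush cvl_0_0                          # start from clean interfaces
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up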
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.075 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:07:41.075 00:07:41.075 --- 10.0.0.2 ping statistics --- 00:07:41.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.076 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:41.076 00:07:41.076 --- 10.0.0.1 ping statistics --- 00:07:41.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.076 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=141815 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 141815 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:41.076 12:13:07 
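Once the namespaces are wired up, the trace opens the NVMe/TCP port and proves two-way reachability before any NVMe traffic flows (the ipts wrapper just tags the rule with an SPDK_NVMF comment for later cleanup). The same check as a standalone sketch, names and addresses as logged:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address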
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 141815 ']' 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.076 12:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 [2024-12-13 12:13:08.002653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:41.076 [2024-12-13 12:13:08.002696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.076 [2024-12-13 12:13:08.081567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.076 [2024-12-13 12:13:08.105433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.076 [2024-12-13 12:13:08.105472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.076 [2024-12-13 12:13:08.105480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.076 [2024-12-13 12:13:08.105486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.076 [2024-12-13 12:13:08.105491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.076 [2024-12-13 12:13:08.106963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.076 [2024-12-13 12:13:08.107072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.076 [2024-12-13 12:13:08.107155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.076 [2024-12-13 12:13:08.107156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 [2024-12-13 12:13:08.246986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 Malloc0 00:07:41.076 [2024-12-13 12:13:08.326870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
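host_management.sh batches its setup RPCs into rpcs.txt and pipes them through rpc_cmd, so only the transport call appears explicitly in the trace. A plausible equivalent with scripts/rpc.py, reconstructed from the Malloc0 bdev, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier, and the listener and host NQNs visible in the log (a sketch of the sequence, not the literal contents of rpcs.txt; the SPDK0 serial number is a made-up placeholder):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0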
target/host_management.sh@73 -- # perfpid=142069 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142069 /var/tmp/bdevperf.sock 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142069 ']' 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.076 { 00:07:41.076 "params": { 00:07:41.076 "name": "Nvme$subsystem", 00:07:41.076 "trtype": "$TEST_TRANSPORT", 00:07:41.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.076 "adrfam": "ipv4", 00:07:41.076 "trsvcid": "$NVMF_PORT", 00:07:41.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.076 "hdgst": ${hdgst:-false}, 00:07:41.076 "ddgst": ${ddgst:-false} 00:07:41.076 }, 00:07:41.076 "method": "bdev_nvme_attach_controller" 00:07:41.076 } 00:07:41.076 EOF 00:07:41.076 )") 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:41.076 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.076 "params": { 00:07:41.076 "name": "Nvme0", 00:07:41.076 "trtype": "tcp", 00:07:41.076 "traddr": "10.0.0.2", 00:07:41.076 "adrfam": "ipv4", 00:07:41.076 "trsvcid": "4420", 00:07:41.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:41.076 "hdgst": false, 00:07:41.076 "ddgst": false 00:07:41.076 }, 00:07:41.076 "method": "bdev_nvme_attach_controller" 00:07:41.076 }' 00:07:41.076 [2024-12-13 12:13:08.422984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
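The --json /dev/fd/63 argument above is bash process substitution after expansion: gen_nvmf_target_json prints the controller config shown and bdevperf reads it as a file. A reconstructed invocation (paths relative to the SPDK tree; assumes gen_nvmf_target_json has been sourced from the nvmf test helpers):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10    # 64 outstanding 64 KiB verify I/Os for 10 s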
00:07:41.076 [2024-12-13 12:13:08.423026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142069 ] 00:07:41.076 [2024-12-13 12:13:08.496831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.076 [2024-12-13 12:13:08.519150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.337 Running I/O for 10 seconds... 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:07:41.337 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:41.599 
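waitforio above is a bounded poll: up to ten samples a quarter second apart, until the bdev reports at least 100 completed reads. The same loop as a standalone sketch (rpc.py path assumed; Nvme0n1 and the socket path as logged; sleep with a fractional argument needs GNU coreutils):

  i=10
  while [ "$i" -gt 0 ]; do
      n=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
      [ "$n" -ge 100 ] && break        # enough I/O observed, target is serving reads
      sleep 0.25
      i=$((i - 1))
  done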
12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.599 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.599 [2024-12-13 12:13:09.238560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with the state(6) to be set 00:07:41.599 [2024-12-13 12:13:09.238699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862590 is same with 
the state(6) to be set 00:07:41.599
[... the same tcp.c:1790 message repeated roughly 50 more times for tqpair=0x862590, timestamps 12:13:09.238705 through 12:13:09.239007; identical lines elided ...]
[2024-12-13 12:13:09.239196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.600
[2024-12-13 12:13:09.239227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.600
[... 62 further READ command / ABORTED - SQ DELETION completion pairs (lba:98432 through lba:106240, len:128 each, cids 1-62) elided; every read in flight on qid:1 was aborted when the submission queue was deleted ...]
[2024-12-13 12:13:09.240166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.602
[2024-12-13 12:13:09.240173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.602
[2024-12-13 12:13:09.240182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0dea0 is same with the state(6) to be set 00:07:41.602
[2024-12-13 12:13:09.241145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:41.602
task offset: 98304 on job bdev=Nvme0n1 fails 00:07:41.602
00:07:41.602 Latency(us)
[2024-12-13T11:13:09.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:41.602 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:41.602 Job: Nvme0n1 ended in about 0.41 seconds with error
00:07:41.602 Verification LBA range: start 0x0 length 0x400
00:07:41.602 Nvme0n1 : 0.41 1891.79 118.24 157.65 0.00 30407.26 3651.29 26588.89
[2024-12-13T11:13:09.302Z] ===================================================================================================================
[2024-12-13T11:13:09.302Z] Total : 1891.79 118.24 157.65 0.00 30407.26 3651.29 26588.89
00:07:41.602 12:13:09
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.602 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.602 [2024-12-13 12:13:09.243498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.602 [2024-12-13 12:13:09.243520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb11d40 (9): Bad file descriptor 00:07:41.602 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.602 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 [2024-12-13 12:13:09.249679] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:41.602 [2024-12-13 12:13:09.249759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:41.602 [2024-12-13 12:13:09.249786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.602 [2024-12-13 12:13:09.249799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:41.602 [2024-12-13 12:13:09.249806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:41.602 [2024-12-13 12:13:09.249814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:41.602 [2024-12-13 12:13:09.249821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb11d40 00:07:41.602 [2024-12-13 12:13:09.249840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb11d40 (9): Bad file descriptor 00:07:41.602 [2024-12-13 12:13:09.249852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:41.602 [2024-12-13 12:13:09.249858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:41.602 [2024-12-13 12:13:09.249867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:41.602 [2024-12-13 12:13:09.249875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
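The rejected CONNECT attempts above are the crux of this host-management check: while the host NQN is off the subsystem's allow list, every reconnect fails with "does not allow host" (sct 1, sc 132), and host_management.sh@85 then re-authorizes it. A minimal sketch of that step, using only the RPC and NQNs that appear in this trace (the earlier removal of the host is assumed to have happened before this excerpt and is not shown here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Re-admit the initiator; until this completes, nvmf_qpair_access_allowed
  # rejects CONNECT exactly as logged above.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0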
00:07:41.602 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.602 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142069 00:07:42.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142069) - No such process 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:42.982 { 00:07:42.982 "params": { 00:07:42.982 "name": "Nvme$subsystem", 00:07:42.982 "trtype": "$TEST_TRANSPORT", 00:07:42.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:42.982 "adrfam": "ipv4", 00:07:42.982 "trsvcid": "$NVMF_PORT", 00:07:42.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:42.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:42.982 "hdgst": ${hdgst:-false}, 00:07:42.982 "ddgst": ${ddgst:-false} 00:07:42.982 }, 00:07:42.982 "method": "bdev_nvme_attach_controller" 00:07:42.982 } 00:07:42.982 EOF 00:07:42.982 )") 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:42.982 12:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:42.982 "params": { 00:07:42.982 "name": "Nvme0", 00:07:42.982 "trtype": "tcp", 00:07:42.982 "traddr": "10.0.0.2", 00:07:42.982 "adrfam": "ipv4", 00:07:42.982 "trsvcid": "4420", 00:07:42.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:42.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:42.982 "hdgst": false, 00:07:42.982 "ddgst": false 00:07:42.982 }, 00:07:42.982 "method": "bdev_nvme_attach_controller" 00:07:42.982 }' 00:07:42.982 [2024-12-13 12:13:10.308185] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
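The heredoc above is gen_nvmf_target_json emitting the bdev config that bdevperf reads over the anonymous fd in --json /dev/fd/62. A hand-written equivalent is sketched below; the params block is copied from the printf output in the log, while the outer "subsystems"/"bdev" wrapper is an assumption about the JSON shape this bdevperf build expects, since the wrapper itself is not printed in this excerpt:

  cat > /tmp/nvme0.json <<'JSON'
  {"subsystems": [{"subsystem": "bdev", "config": [{
    "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
               "adrfam": "ipv4", "trsvcid": "4420",
               "subnqn": "nqn.2016-06.io.spdk:cnode0",
               "hostnqn": "nqn.2016-06.io.spdk:host0",
               "hdgst": false, "ddgst": false},
    "method": "bdev_nvme_attach_controller"}]}]}
  JSON
  # Same knobs as the logged run: queue depth 64, 64 KiB I/O, verify workload, 1 s.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1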
00:07:42.982 [2024-12-13 12:13:10.308231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142316 ] 00:07:42.982 [2024-12-13 12:13:10.383069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.982 [2024-12-13 12:13:10.404984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.982 Running I/O for 1 seconds... 00:07:43.921 1984.00 IOPS, 124.00 MiB/s 00:07:43.921 Latency(us) 00:07:43.921 [2024-12-13T11:13:11.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:43.921 Verification LBA range: start 0x0 length 0x400 00:07:43.921 Nvme0n1 : 1.01 2028.73 126.80 0.00 0.00 31057.95 5118.05 26588.89 00:07:43.921 [2024-12-13T11:13:11.621Z] =================================================================================================================== 00:07:43.921 [2024-12-13T11:13:11.621Z] Total : 2028.73 126.80 0.00 0.00 31057.95 5118.05 26588.89 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.180 rmmod nvme_tcp 00:07:44.180 rmmod nvme_fabrics 00:07:44.180 rmmod nvme_keyring 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 141815 ']' 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 141815 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 141815 ']' 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 141815 00:07:44.180 12:13:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.180 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141815 00:07:44.440 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.440 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.440 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141815' 00:07:44.440 killing process with pid 141815 00:07:44.440 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 141815 00:07:44.440 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 141815 00:07:44.440 [2024-12-13 12:13:12.043318] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.440 12:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:46.982 00:07:46.982 real 0m12.381s 00:07:46.982 user 0m19.703s 00:07:46.982 sys 0m5.603s 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 ************************************ 00:07:46.982 END TEST nvmf_host_management 00:07:46.982 ************************************ 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 ************************************ 00:07:46.982 START TEST nvmf_lvol 00:07:46.982 ************************************ 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.982 * Looking for test storage... 00:07:46.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.982 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.983 --rc genhtml_branch_coverage=1 00:07:46.983 --rc genhtml_function_coverage=1 00:07:46.983 --rc genhtml_legend=1 00:07:46.983 --rc geninfo_all_blocks=1 00:07:46.983 --rc geninfo_unexecuted_blocks=1 00:07:46.983 00:07:46.983 ' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.983 --rc genhtml_branch_coverage=1 00:07:46.983 --rc genhtml_function_coverage=1 00:07:46.983 --rc genhtml_legend=1 00:07:46.983 --rc geninfo_all_blocks=1 00:07:46.983 --rc geninfo_unexecuted_blocks=1 00:07:46.983 00:07:46.983 ' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.983 --rc genhtml_branch_coverage=1 00:07:46.983 --rc genhtml_function_coverage=1 00:07:46.983 --rc genhtml_legend=1 00:07:46.983 --rc geninfo_all_blocks=1 00:07:46.983 --rc geninfo_unexecuted_blocks=1 00:07:46.983 00:07:46.983 ' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.983 --rc genhtml_branch_coverage=1 00:07:46.983 --rc genhtml_function_coverage=1 00:07:46.983 --rc genhtml_legend=1 00:07:46.983 --rc geninfo_all_blocks=1 00:07:46.983 --rc geninfo_unexecuted_blocks=1 00:07:46.983 00:07:46.983 ' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
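The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.0 (it is 1.15 here), which selects the older --rc option spellings echoed just below. The comparison logic, reduced to a simplified re-implementation for illustration (not the exact in-tree code; numeric version components assumed):

  lt() {                        # "is $1 < $2" for dotted version strings
    local i
    local IFS=.-
    local -a a b
    read -ra a <<< "$1"         # split fields on '.' and '-'
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                    # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: use lcov_branch_coverage/lcov_function_coverage'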
00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same /opt/golangci, /opt/protoc and /opt/go prefixes repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=[paths/export.sh@3 and @4 re-prepend the same /opt/go, /opt/golangci and /opt/protoc entries; @5 exports PATH and @6 echoes the final value; the repeated multi-kilobyte PATH strings are condensed here] 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.983 12:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:53.569 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:53.569 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.569 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.570 12:13:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:53.570 Found net devices under 0000:af:00.0: cvl_0_0 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:53.570 Found net devices under 0000:af:00.1: cvl_0_1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:07:53.570 00:07:53.570 --- 10.0.0.2 ping statistics --- 00:07:53.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.570 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:53.570 00:07:53.570 --- 10.0.0.1 ping statistics --- 00:07:53.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.570 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146034 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146034 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146034 ']' 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.570 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.571 [2024-12-13 12:13:20.481456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
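The two pings above close out nvmftestinit's network bring-up: the e810 port pair discovered earlier is split so that cvl_0_0 lives in a private namespace as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Regrouping the ip/iptables commands already run in the trace into one readable sequence (no new commands, only reordered for reading):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic arriving on the initiator-facing interface:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns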
00:07:53.571 [2024-12-13 12:13:20.481528] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.571 [2024-12-13 12:13:20.568984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.571 [2024-12-13 12:13:20.591675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.571 [2024-12-13 12:13:20.591710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.571 [2024-12-13 12:13:20.591717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.571 [2024-12-13 12:13:20.591723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.571 [2024-12-13 12:13:20.591729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.571 [2024-12-13 12:13:20.592918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.571 [2024-12-13 12:13:20.593032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.571 [2024-12-13 12:13:20.593033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.571 [2024-12-13 12:13:20.885913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.571 12:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.571 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.571 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.832 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:53.832 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.091 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.091 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f49ee874-d577-4396-a98b-7c159da5aadc 00:07:54.091 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f49ee874-d577-4396-a98b-7c159da5aadc lvol 20 00:07:54.351 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=513629b2-e7ff-475e-b217-bd1a9ed61ae6 00:07:54.351 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.610 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 513629b2-e7ff-475e-b217-bd1a9ed61ae6 00:07:54.870 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:54.870 [2024-12-13 12:13:22.532884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.870 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.129 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:55.129 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146504 00:07:55.129 12:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:56.067 12:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 513629b2-e7ff-475e-b217-bd1a9ed61ae6 MY_SNAPSHOT 00:07:56.327 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d505a74a-e4a7-4432-9d60-1d440b98d296 00:07:56.327 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 513629b2-e7ff-475e-b217-bd1a9ed61ae6 30 00:07:56.586 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d505a74a-e4a7-4432-9d60-1d440b98d296 MY_CLONE 00:07:56.845 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0f423dc9-b2fe-4247-aac7-3f35cb2bd366 00:07:56.845 12:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0f423dc9-b2fe-4247-aac7-3f35cb2bd366 00:07:57.415 12:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146504 00:08:07.403 Initializing NVMe Controllers 00:08:07.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.403 Controller IO queue size 128, less than required. 00:08:07.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
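Stripped of xtrace noise, the lvol scenario traced above builds a striped lvolstore, exports a 20 MiB lvol over NVMe/TCP, and then mutates it while spdk_nvme_perf drives randwrite I/O against it (the perf results follow below). The RPC sequence, condensed from the log; the UUIDs are the ones actually returned in this run, and the 20/30 sizes follow the script's LVOL_BDEV_INIT_SIZE/LVOL_BDEV_FINAL_SIZE, presumably interpreted in MiB by this SPDK version:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                        # -> Malloc0
  $rpc bdev_malloc_create 64 512                        # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs               # -> f49ee874-...
  $rpc bdev_lvol_create -u f49ee874-d577-4396-a98b-7c159da5aadc lvol 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 513629b2-e7ff-475e-b217-bd1a9ed61ae6
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While perf runs: snapshot, grow the origin, clone the snapshot, inflate the clone.
  $rpc bdev_lvol_snapshot 513629b2-e7ff-475e-b217-bd1a9ed61ae6 MY_SNAPSHOT
  $rpc bdev_lvol_resize 513629b2-e7ff-475e-b217-bd1a9ed61ae6 30
  $rpc bdev_lvol_clone d505a74a-e4a7-4432-9d60-1d440b98d296 MY_CLONE
  $rpc bdev_lvol_inflate 0f423dc9-b2fe-4247-aac7-3f35cb2bd366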
00:08:07.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.403 Initialization complete. Launching workers. 00:08:07.403 ======================================================== 00:08:07.403 Latency(us) 00:08:07.403 Device Information : IOPS MiB/s Average min max 00:08:07.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12366.19 48.31 10356.86 1313.39 58085.56 00:08:07.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12219.60 47.73 10477.10 3434.38 48287.02 00:08:07.403 ======================================================== 00:08:07.403 Total : 24585.79 96.04 10416.62 1313.39 58085.56 00:08:07.403 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 513629b2-e7ff-475e-b217-bd1a9ed61ae6 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f49ee874-d577-4396-a98b-7c159da5aadc 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.403 rmmod nvme_tcp 00:08:07.403 rmmod nvme_fabrics 00:08:07.403 rmmod nvme_keyring 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:07.403 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146034 ']' 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146034 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146034 ']' 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146034 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.404 12:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146034 00:08:07.404 12:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146034' 00:08:07.404 killing process with pid 146034 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 146034 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146034 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.404 12:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.783 00:08:08.783 real 0m22.050s 00:08:08.783 user 1m3.411s 00:08:08.783 sys 0m7.712s 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.783 ************************************ 00:08:08.783 END TEST nvmf_lvol 00:08:08.783 ************************************ 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.783 ************************************ 00:08:08.783 START TEST nvmf_lvs_grow 00:08:08.783 ************************************ 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.783 * Looking for test storage... 
00:08:08.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.783 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:09.044 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.045 --rc genhtml_branch_coverage=1 00:08:09.045 --rc genhtml_function_coverage=1 00:08:09.045 --rc genhtml_legend=1 00:08:09.045 --rc geninfo_all_blocks=1 00:08:09.045 --rc geninfo_unexecuted_blocks=1 00:08:09.045 00:08:09.045 ' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.045 --rc genhtml_branch_coverage=1 00:08:09.045 --rc genhtml_function_coverage=1 00:08:09.045 --rc genhtml_legend=1 00:08:09.045 --rc geninfo_all_blocks=1 00:08:09.045 --rc geninfo_unexecuted_blocks=1 00:08:09.045 00:08:09.045 ' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.045 --rc genhtml_branch_coverage=1 00:08:09.045 --rc genhtml_function_coverage=1 00:08:09.045 --rc genhtml_legend=1 00:08:09.045 --rc geninfo_all_blocks=1 00:08:09.045 --rc geninfo_unexecuted_blocks=1 00:08:09.045 00:08:09.045 ' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.045 --rc genhtml_branch_coverage=1 00:08:09.045 --rc genhtml_function_coverage=1 00:08:09.045 --rc genhtml_legend=1 00:08:09.045 --rc geninfo_all_blocks=1 00:08:09.045 --rc geninfo_unexecuted_blocks=1 00:08:09.045 00:08:09.045 ' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:09.045 12:13:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.045 12:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:15.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:15.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.639 12:13:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:15.639 Found net devices under 0000:af:00.0: cvl_0_0 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:15.639 Found net devices under 0000:af:00.1: cvl_0_1 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.639 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:08:15.640 00:08:15.640 --- 10.0.0.2 ping statistics --- 00:08:15.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.640 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:08:15.640 00:08:15.640 --- 10.0.0.1 ping statistics --- 00:08:15.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.640 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=151929 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 151929 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 151929 ']' 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.640 [2024-12-13 12:13:42.589880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:15.640 [2024-12-13 12:13:42.589925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.640 [2024-12-13 12:13:42.664706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.640 [2024-12-13 12:13:42.685297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.640 [2024-12-13 12:13:42.685333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.640 [2024-12-13 12:13:42.685340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.640 [2024-12-13 12:13:42.685345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.640 [2024-12-13 12:13:42.685350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.640 [2024-12-13 12:13:42.685868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.640 12:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.640 [2024-12-13 12:13:42.992876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.640 ************************************ 00:08:15.640 START TEST lvs_grow_clean 00:08:15.640 ************************************ 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:15.640 12:13:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:15.640 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:15.901 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:15.901 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:15.901 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 57ba39ac-1e14-43f4-9a87-402d91296f70 lvol 150 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3888688b-91c8-485c-b415-d01de7acaeda 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.160 12:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.420 [2024-12-13 12:13:43.998595] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:16.420 [2024-12-13 12:13:43.998647] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.420 true 00:08:16.420 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:16.420 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:16.680 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:16.680 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.680 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3888688b-91c8-485c-b415-d01de7acaeda 00:08:16.940 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:17.200 [2024-12-13 12:13:44.736835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.200 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152268 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152268 /var/tmp/bdevperf.sock 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152268 ']' 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.460 12:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.460 [2024-12-13 12:13:44.989566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:17.460 [2024-12-13 12:13:44.989610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152268 ] 00:08:17.460 [2024-12-13 12:13:45.063417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.460 [2024-12-13 12:13:45.085152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.719 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.719 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:17.719 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:17.979 Nvme0n1 00:08:17.979 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.238 [ 00:08:18.238 { 00:08:18.238 "name": "Nvme0n1", 00:08:18.238 "aliases": [ 00:08:18.238 "3888688b-91c8-485c-b415-d01de7acaeda" 00:08:18.238 ], 00:08:18.238 "product_name": "NVMe disk", 00:08:18.238 "block_size": 4096, 00:08:18.238 "num_blocks": 38912, 00:08:18.238 "uuid": "3888688b-91c8-485c-b415-d01de7acaeda", 00:08:18.238 "numa_id": 1, 00:08:18.238 "assigned_rate_limits": { 00:08:18.238 "rw_ios_per_sec": 0, 00:08:18.238 "rw_mbytes_per_sec": 0, 00:08:18.238 "r_mbytes_per_sec": 0, 00:08:18.238 "w_mbytes_per_sec": 0 00:08:18.238 }, 00:08:18.238 "claimed": false, 00:08:18.238 "zoned": false, 00:08:18.238 "supported_io_types": { 00:08:18.238 "read": true, 00:08:18.239 "write": true, 00:08:18.239 "unmap": true, 00:08:18.239 "flush": true, 00:08:18.239 "reset": true, 00:08:18.239 "nvme_admin": true, 00:08:18.239 "nvme_io": true, 00:08:18.239 "nvme_io_md": false, 00:08:18.239 "write_zeroes": true, 00:08:18.239 "zcopy": false, 00:08:18.239 "get_zone_info": false, 00:08:18.239 "zone_management": false, 00:08:18.239 "zone_append": false, 00:08:18.239 "compare": true, 00:08:18.239 "compare_and_write": true, 00:08:18.239 "abort": true, 00:08:18.239 "seek_hole": false, 00:08:18.239 "seek_data": false, 00:08:18.239 "copy": true, 00:08:18.239 "nvme_iov_md": false 00:08:18.239 }, 00:08:18.239 "memory_domains": [ 00:08:18.239 { 00:08:18.239 "dma_device_id": "system", 00:08:18.239 "dma_device_type": 1 00:08:18.239 } 00:08:18.239 ], 00:08:18.239 "driver_specific": { 00:08:18.239 "nvme": [ 00:08:18.239 { 00:08:18.239 "trid": { 00:08:18.239 "trtype": "TCP", 00:08:18.239 "adrfam": "IPv4", 00:08:18.239 "traddr": "10.0.0.2", 00:08:18.239 "trsvcid": "4420", 00:08:18.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.239 }, 00:08:18.239 "ctrlr_data": { 00:08:18.239 "cntlid": 1, 00:08:18.239 "vendor_id": "0x8086", 00:08:18.239 "model_number": "SPDK bdev Controller", 00:08:18.239 "serial_number": "SPDK0", 00:08:18.239 "firmware_revision": "25.01", 00:08:18.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.239 "oacs": { 00:08:18.239 "security": 0, 00:08:18.239 "format": 0, 00:08:18.239 "firmware": 0, 00:08:18.239 "ns_manage": 0 00:08:18.239 }, 00:08:18.239 "multi_ctrlr": true, 00:08:18.239 
"ana_reporting": false 00:08:18.239 }, 00:08:18.239 "vs": { 00:08:18.239 "nvme_version": "1.3" 00:08:18.239 }, 00:08:18.239 "ns_data": { 00:08:18.239 "id": 1, 00:08:18.239 "can_share": true 00:08:18.239 } 00:08:18.239 } 00:08:18.239 ], 00:08:18.239 "mp_policy": "active_passive" 00:08:18.239 } 00:08:18.239 } 00:08:18.239 ] 00:08:18.239 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152491 00:08:18.239 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.239 12:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.239 Running I/O for 10 seconds... 00:08:19.620 Latency(us) 00:08:19.620 [2024-12-13T11:13:47.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.620 Nvme0n1 : 1.00 23570.00 92.07 0.00 0.00 0.00 0.00 0.00 00:08:19.620 [2024-12-13T11:13:47.320Z] =================================================================================================================== 00:08:19.620 [2024-12-13T11:13:47.320Z] Total : 23570.00 92.07 0.00 0.00 0.00 0.00 0.00 00:08:19.620 00:08:20.190 12:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:20.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.450 Nvme0n1 : 2.00 23789.50 92.93 0.00 0.00 0.00 0.00 0.00 00:08:20.450 [2024-12-13T11:13:48.150Z] =================================================================================================================== 00:08:20.450 [2024-12-13T11:13:48.150Z] Total : 23789.50 92.93 0.00 0.00 0.00 0.00 0.00 00:08:20.450 00:08:20.450 true 00:08:20.450 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:20.450 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:20.710 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.710 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.710 12:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152491 00:08:21.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.279 Nvme0n1 : 3.00 23864.33 93.22 0.00 0.00 0.00 0.00 0.00 00:08:21.279 [2024-12-13T11:13:48.979Z] =================================================================================================================== 00:08:21.279 [2024-12-13T11:13:48.979Z] Total : 23864.33 93.22 0.00 0.00 0.00 0.00 0.00 00:08:21.279 00:08:22.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.660 Nvme0n1 : 4.00 23923.00 93.45 0.00 0.00 0.00 0.00 0.00 00:08:22.660 [2024-12-13T11:13:50.360Z] 
=================================================================================================================== 00:08:22.660 [2024-12-13T11:13:50.360Z] Total : 23923.00 93.45 0.00 0.00 0.00 0.00 0.00 00:08:22.660 00:08:23.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.598 Nvme0n1 : 5.00 23891.60 93.33 0.00 0.00 0.00 0.00 0.00 00:08:23.598 [2024-12-13T11:13:51.298Z] =================================================================================================================== 00:08:23.598 [2024-12-13T11:13:51.298Z] Total : 23891.60 93.33 0.00 0.00 0.00 0.00 0.00 00:08:23.598 00:08:24.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.539 Nvme0n1 : 6.00 23945.00 93.54 0.00 0.00 0.00 0.00 0.00 00:08:24.539 [2024-12-13T11:13:52.239Z] =================================================================================================================== 00:08:24.539 [2024-12-13T11:13:52.239Z] Total : 23945.00 93.54 0.00 0.00 0.00 0.00 0.00 00:08:24.539 00:08:25.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.479 Nvme0n1 : 7.00 23989.86 93.71 0.00 0.00 0.00 0.00 0.00 00:08:25.479 [2024-12-13T11:13:53.179Z] =================================================================================================================== 00:08:25.479 [2024-12-13T11:13:53.179Z] Total : 23989.86 93.71 0.00 0.00 0.00 0.00 0.00 00:08:25.479 00:08:26.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.416 Nvme0n1 : 8.00 24024.75 93.85 0.00 0.00 0.00 0.00 0.00 00:08:26.416 [2024-12-13T11:13:54.116Z] =================================================================================================================== 00:08:26.416 [2024-12-13T11:13:54.116Z] Total : 24024.75 93.85 0.00 0.00 0.00 0.00 0.00 00:08:26.416 00:08:27.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.361 Nvme0n1 : 9.00 24052.67 93.96 0.00 0.00 0.00 0.00 0.00 00:08:27.361 [2024-12-13T11:13:55.061Z] =================================================================================================================== 00:08:27.361 [2024-12-13T11:13:55.061Z] Total : 24052.67 93.96 0.00 0.00 0.00 0.00 0.00 00:08:27.361 00:08:28.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.315 Nvme0n1 : 10.00 24074.10 94.04 0.00 0.00 0.00 0.00 0.00 00:08:28.315 [2024-12-13T11:13:56.015Z] =================================================================================================================== 00:08:28.315 [2024-12-13T11:13:56.015Z] Total : 24074.10 94.04 0.00 0.00 0.00 0.00 0.00 00:08:28.315 00:08:28.315 00:08:28.315 Latency(us) 00:08:28.315 [2024-12-13T11:13:56.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.315 Nvme0n1 : 10.00 24074.14 94.04 0.00 0.00 5313.75 3151.97 14105.84 00:08:28.315 [2024-12-13T11:13:56.015Z] =================================================================================================================== 00:08:28.316 [2024-12-13T11:13:56.016Z] Total : 24074.14 94.04 0.00 0.00 5313.75 3151.97 14105.84 00:08:28.316 { 00:08:28.316 "results": [ 00:08:28.316 { 00:08:28.316 "job": "Nvme0n1", 00:08:28.316 "core_mask": "0x2", 00:08:28.316 "workload": "randwrite", 00:08:28.316 "status": "finished", 00:08:28.316 "queue_depth": 128, 00:08:28.316 "io_size": 4096, 00:08:28.316 
"runtime": 10.002642, 00:08:28.316 "iops": 24074.139612314426, 00:08:28.316 "mibps": 94.03960786060323, 00:08:28.316 "io_failed": 0, 00:08:28.316 "io_timeout": 0, 00:08:28.316 "avg_latency_us": 5313.748400462338, 00:08:28.316 "min_latency_us": 3151.9695238095237, 00:08:28.316 "max_latency_us": 14105.843809523809 00:08:28.316 } 00:08:28.316 ], 00:08:28.316 "core_count": 1 00:08:28.316 } 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152268 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152268 ']' 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152268 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.316 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152268 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152268' 00:08:28.575 killing process with pid 152268 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152268 00:08:28.575 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.575 00:08:28.575 Latency(us) 00:08:28.575 [2024-12-13T11:13:56.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.575 [2024-12-13T11:13:56.275Z] =================================================================================================================== 00:08:28.575 [2024-12-13T11:13:56.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152268 00:08:28.575 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.835 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.094 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:29.094 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.095 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:29.095 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:29.095 12:13:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.354 [2024-12-13 12:13:56.932463] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:29.354 12:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:29.614 request: 00:08:29.614 { 00:08:29.614 "uuid": "57ba39ac-1e14-43f4-9a87-402d91296f70", 00:08:29.614 "method": "bdev_lvol_get_lvstores", 00:08:29.614 "req_id": 1 00:08:29.614 } 00:08:29.614 Got JSON-RPC error response 00:08:29.614 response: 00:08:29.614 { 00:08:29.614 "code": -19, 00:08:29.614 "message": "No such device" 00:08:29.614 } 00:08:29.614 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:29.614 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.614 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.614 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.614 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.874 aio_bdev 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3888688b-91c8-485c-b415-d01de7acaeda 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3888688b-91c8-485c-b415-d01de7acaeda 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.874 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3888688b-91c8-485c-b415-d01de7acaeda -t 2000 00:08:30.134 [ 00:08:30.134 { 00:08:30.134 "name": "3888688b-91c8-485c-b415-d01de7acaeda", 00:08:30.134 "aliases": [ 00:08:30.134 "lvs/lvol" 00:08:30.134 ], 00:08:30.134 "product_name": "Logical Volume", 00:08:30.134 "block_size": 4096, 00:08:30.134 "num_blocks": 38912, 00:08:30.134 "uuid": "3888688b-91c8-485c-b415-d01de7acaeda", 00:08:30.134 "assigned_rate_limits": { 00:08:30.134 "rw_ios_per_sec": 0, 00:08:30.134 "rw_mbytes_per_sec": 0, 00:08:30.134 "r_mbytes_per_sec": 0, 00:08:30.134 "w_mbytes_per_sec": 0 00:08:30.134 }, 00:08:30.134 "claimed": false, 00:08:30.134 "zoned": false, 00:08:30.134 "supported_io_types": { 00:08:30.134 "read": true, 00:08:30.134 "write": true, 00:08:30.134 "unmap": true, 00:08:30.134 "flush": false, 00:08:30.134 "reset": true, 00:08:30.134 "nvme_admin": false, 00:08:30.134 "nvme_io": false, 00:08:30.134 "nvme_io_md": false, 00:08:30.134 "write_zeroes": true, 00:08:30.134 "zcopy": false, 00:08:30.134 "get_zone_info": false, 00:08:30.134 "zone_management": false, 00:08:30.134 "zone_append": false, 00:08:30.134 "compare": false, 00:08:30.134 "compare_and_write": false, 00:08:30.134 "abort": false, 00:08:30.134 "seek_hole": true, 00:08:30.134 "seek_data": true, 00:08:30.134 "copy": false, 00:08:30.134 "nvme_iov_md": false 00:08:30.134 }, 00:08:30.134 "driver_specific": { 00:08:30.134 "lvol": { 00:08:30.134 "lvol_store_uuid": "57ba39ac-1e14-43f4-9a87-402d91296f70", 00:08:30.134 "base_bdev": "aio_bdev", 00:08:30.134 "thin_provision": false, 00:08:30.134 "num_allocated_clusters": 38, 00:08:30.134 "snapshot": false, 00:08:30.134 "clone": false, 00:08:30.134 "esnap_clone": false 00:08:30.134 } 00:08:30.134 } 00:08:30.134 } 00:08:30.134 ] 00:08:30.134 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:30.134 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:30.134 
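Note: the trace just above recreates the AIO bdev after its deletion; the lvstore metadata is replayed from the device, which is why the logical-volume JSON reappears unchanged. The surrounding lines then re-read the lvstore and assert its cluster counts. A minimal stand-alone sketch of that check, assuming a running SPDK target and that rpc.py is on PATH (in the trace it is invoked by full workspace path); the uuid and expected counts are the ones shown in this trace:

    lvs_uuid=57ba39ac-1e14-43f4-9a87-402d91296f70
    # Query the recovered lvstore and compare against the expected geometry,
    # as the free_clusters/total_data_clusters checks around this point do.
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "unexpected: free=$free total=$total" >&2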
12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:30.394 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:30.394 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:30.394 12:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.394 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.394 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3888688b-91c8-485c-b415-d01de7acaeda 00:08:30.655 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57ba39ac-1e14-43f4-9a87-402d91296f70 00:08:30.914 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.174 00:08:31.174 real 0m15.609s 00:08:31.174 user 0m15.162s 00:08:31.174 sys 0m1.508s 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 ************************************ 00:08:31.174 END TEST lvs_grow_clean 00:08:31.174 ************************************ 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 ************************************ 00:08:31.174 START TEST lvs_grow_dirty 00:08:31.174 ************************************ 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.174 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.175 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.434 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.434 12:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=580a8547-160e-42a3-8784-8f00baa2531a 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.694 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 580a8547-160e-42a3-8784-8f00baa2531a lvol 150 00:08:31.954 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:31.954 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.954 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.214 [2024-12-13 12:13:59.689241] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.214 [2024-12-13 12:13:59.689292] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.214 true 00:08:32.214 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:32.214 12:13:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.214 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.214 12:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.474 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:32.735 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.735 [2024-12-13 12:14:00.419396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=155013 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 155013 /var/tmp/bdevperf.sock 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 155013 ']' 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.995 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.995 [2024-12-13 12:14:00.681281] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
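Note: the lines around this point launch bdevperf in wait-for-RPC mode against the subsystem exported above, attach it to the target over TCP, and drive the 10-second randwrite run whose per-second tables follow. A condensed sketch of that sequence, with $SPDK standing in for the Jenkins workspace checkout (an assumed shorthand; every flag is copied from this trace):

    # Start bdevperf idle (-z) on its own RPC socket, core mask 0x2;
    # the harness waits for the socket before issuing any RPCs.
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the NVMe-oF namespace as bdev Nvme0n1, as the trace below does.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Kick off the configured workload and collect the results JSON.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests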
00:08:32.995 [2024-12-13 12:14:00.681325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155013 ] 00:08:33.255 [2024-12-13 12:14:00.753350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.255 [2024-12-13 12:14:00.775077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.255 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.255 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:33.255 12:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.514 Nvme0n1 00:08:33.514 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.774 [ 00:08:33.774 { 00:08:33.774 "name": "Nvme0n1", 00:08:33.774 "aliases": [ 00:08:33.774 "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b" 00:08:33.774 ], 00:08:33.774 "product_name": "NVMe disk", 00:08:33.774 "block_size": 4096, 00:08:33.774 "num_blocks": 38912, 00:08:33.774 "uuid": "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b", 00:08:33.774 "numa_id": 1, 00:08:33.774 "assigned_rate_limits": { 00:08:33.774 "rw_ios_per_sec": 0, 00:08:33.774 "rw_mbytes_per_sec": 0, 00:08:33.774 "r_mbytes_per_sec": 0, 00:08:33.774 "w_mbytes_per_sec": 0 00:08:33.774 }, 00:08:33.774 "claimed": false, 00:08:33.774 "zoned": false, 00:08:33.774 "supported_io_types": { 00:08:33.774 "read": true, 00:08:33.774 "write": true, 00:08:33.774 "unmap": true, 00:08:33.774 "flush": true, 00:08:33.774 "reset": true, 00:08:33.774 "nvme_admin": true, 00:08:33.774 "nvme_io": true, 00:08:33.774 "nvme_io_md": false, 00:08:33.774 "write_zeroes": true, 00:08:33.774 "zcopy": false, 00:08:33.774 "get_zone_info": false, 00:08:33.774 "zone_management": false, 00:08:33.774 "zone_append": false, 00:08:33.774 "compare": true, 00:08:33.774 "compare_and_write": true, 00:08:33.774 "abort": true, 00:08:33.774 "seek_hole": false, 00:08:33.774 "seek_data": false, 00:08:33.774 "copy": true, 00:08:33.774 "nvme_iov_md": false 00:08:33.774 }, 00:08:33.774 "memory_domains": [ 00:08:33.774 { 00:08:33.774 "dma_device_id": "system", 00:08:33.774 "dma_device_type": 1 00:08:33.774 } 00:08:33.774 ], 00:08:33.774 "driver_specific": { 00:08:33.774 "nvme": [ 00:08:33.774 { 00:08:33.774 "trid": { 00:08:33.774 "trtype": "TCP", 00:08:33.774 "adrfam": "IPv4", 00:08:33.774 "traddr": "10.0.0.2", 00:08:33.774 "trsvcid": "4420", 00:08:33.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:33.774 }, 00:08:33.774 "ctrlr_data": { 00:08:33.775 "cntlid": 1, 00:08:33.775 "vendor_id": "0x8086", 00:08:33.775 "model_number": "SPDK bdev Controller", 00:08:33.775 "serial_number": "SPDK0", 00:08:33.775 "firmware_revision": "25.01", 00:08:33.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.775 "oacs": { 00:08:33.775 "security": 0, 00:08:33.775 "format": 0, 00:08:33.775 "firmware": 0, 00:08:33.775 "ns_manage": 0 00:08:33.775 }, 00:08:33.775 "multi_ctrlr": true, 00:08:33.775 
"ana_reporting": false 00:08:33.775 }, 00:08:33.775 "vs": { 00:08:33.775 "nvme_version": "1.3" 00:08:33.775 }, 00:08:33.775 "ns_data": { 00:08:33.775 "id": 1, 00:08:33.775 "can_share": true 00:08:33.775 } 00:08:33.775 } 00:08:33.775 ], 00:08:33.775 "mp_policy": "active_passive" 00:08:33.775 } 00:08:33.775 } 00:08:33.775 ] 00:08:33.775 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155039 00:08:33.775 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.775 12:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.775 Running I/O for 10 seconds... 00:08:35.158 Latency(us) 00:08:35.158 [2024-12-13T11:14:02.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.158 Nvme0n1 : 1.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:08:35.159 [2024-12-13T11:14:02.859Z] =================================================================================================================== 00:08:35.159 [2024-12-13T11:14:02.859Z] Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:08:35.159 00:08:35.729 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:35.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.989 Nvme0n1 : 2.00 23848.50 93.16 0.00 0.00 0.00 0.00 0.00 00:08:35.989 [2024-12-13T11:14:03.689Z] =================================================================================================================== 00:08:35.989 [2024-12-13T11:14:03.689Z] Total : 23848.50 93.16 0.00 0.00 0.00 0.00 0.00 00:08:35.989 00:08:35.989 true 00:08:35.989 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:35.989 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.249 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.250 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.250 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155039 00:08:36.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.820 Nvme0n1 : 3.00 23892.33 93.33 0.00 0.00 0.00 0.00 0.00 00:08:36.820 [2024-12-13T11:14:04.520Z] =================================================================================================================== 00:08:36.820 [2024-12-13T11:14:04.520Z] Total : 23892.33 93.33 0.00 0.00 0.00 0.00 0.00 00:08:36.820 00:08:38.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.202 Nvme0n1 : 4.00 23932.75 93.49 0.00 0.00 0.00 0.00 0.00 00:08:38.202 [2024-12-13T11:14:05.902Z] 
=================================================================================================================== 00:08:38.202 [2024-12-13T11:14:05.902Z] Total : 23932.75 93.49 0.00 0.00 0.00 0.00 0.00 00:08:38.202 00:08:38.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.772 Nvme0n1 : 5.00 23985.40 93.69 0.00 0.00 0.00 0.00 0.00 00:08:38.772 [2024-12-13T11:14:06.472Z] =================================================================================================================== 00:08:38.772 [2024-12-13T11:14:06.472Z] Total : 23985.40 93.69 0.00 0.00 0.00 0.00 0.00 00:08:38.772 00:08:40.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.154 Nvme0n1 : 6.00 24011.67 93.80 0.00 0.00 0.00 0.00 0.00 00:08:40.154 [2024-12-13T11:14:07.854Z] =================================================================================================================== 00:08:40.154 [2024-12-13T11:14:07.854Z] Total : 24011.67 93.80 0.00 0.00 0.00 0.00 0.00 00:08:40.154 00:08:41.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.095 Nvme0n1 : 7.00 24039.00 93.90 0.00 0.00 0.00 0.00 0.00 00:08:41.095 [2024-12-13T11:14:08.795Z] =================================================================================================================== 00:08:41.095 [2024-12-13T11:14:08.795Z] Total : 24039.00 93.90 0.00 0.00 0.00 0.00 0.00 00:08:41.095 00:08:42.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.034 Nvme0n1 : 8.00 24067.75 94.01 0.00 0.00 0.00 0.00 0.00 00:08:42.034 [2024-12-13T11:14:09.734Z] =================================================================================================================== 00:08:42.034 [2024-12-13T11:14:09.734Z] Total : 24067.75 94.01 0.00 0.00 0.00 0.00 0.00 00:08:42.034 00:08:42.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.973 Nvme0n1 : 9.00 24051.67 93.95 0.00 0.00 0.00 0.00 0.00 00:08:42.973 [2024-12-13T11:14:10.673Z] =================================================================================================================== 00:08:42.973 [2024-12-13T11:14:10.673Z] Total : 24051.67 93.95 0.00 0.00 0.00 0.00 0.00 00:08:42.973 00:08:43.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.914 Nvme0n1 : 10.00 24055.60 93.97 0.00 0.00 0.00 0.00 0.00 00:08:43.914 [2024-12-13T11:14:11.614Z] =================================================================================================================== 00:08:43.914 [2024-12-13T11:14:11.614Z] Total : 24055.60 93.97 0.00 0.00 0.00 0.00 0.00 00:08:43.914 00:08:43.914 00:08:43.914 Latency(us) 00:08:43.914 [2024-12-13T11:14:11.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.914 Nvme0n1 : 10.00 24057.22 93.97 0.00 0.00 5317.72 3167.57 11671.65 00:08:43.914 [2024-12-13T11:14:11.614Z] =================================================================================================================== 00:08:43.914 [2024-12-13T11:14:11.614Z] Total : 24057.22 93.97 0.00 0.00 5317.72 3167.57 11671.65 00:08:43.914 { 00:08:43.914 "results": [ 00:08:43.914 { 00:08:43.915 "job": "Nvme0n1", 00:08:43.915 "core_mask": "0x2", 00:08:43.915 "workload": "randwrite", 00:08:43.915 "status": "finished", 00:08:43.915 "queue_depth": 128, 00:08:43.915 "io_size": 4096, 00:08:43.915 
"runtime": 10.004646, 00:08:43.915 "iops": 24057.22301418761, 00:08:43.915 "mibps": 93.97352739917035, 00:08:43.915 "io_failed": 0, 00:08:43.915 "io_timeout": 0, 00:08:43.915 "avg_latency_us": 5317.718530885389, 00:08:43.915 "min_latency_us": 3167.5733333333333, 00:08:43.915 "max_latency_us": 11671.649523809523 00:08:43.915 } 00:08:43.915 ], 00:08:43.915 "core_count": 1 00:08:43.915 } 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 155013 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 155013 ']' 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 155013 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155013 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155013' 00:08:43.915 killing process with pid 155013 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 155013 00:08:43.915 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.915 00:08:43.915 Latency(us) 00:08:43.915 [2024-12-13T11:14:11.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.915 [2024-12-13T11:14:11.615Z] =================================================================================================================== 00:08:43.915 [2024-12-13T11:14:11.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.915 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 155013 00:08:44.174 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.434 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.434 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:44.434 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.693 12:14:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 151929 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 151929 00:08:44.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 151929 Killed "${NVMF_APP[@]}" "$@" 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=156864 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 156864 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 156864 ']' 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.693 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.693 [2024-12-13 12:14:12.359231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:44.693 [2024-12-13 12:14:12.359275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.953 [2024-12-13 12:14:12.434740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.953 [2024-12-13 12:14:12.456257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.953 [2024-12-13 12:14:12.456293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.953 [2024-12-13 12:14:12.456300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.953 [2024-12-13 12:14:12.456306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:44.953 [2024-12-13 12:14:12.456311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.953 [2024-12-13 12:14:12.456800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.953 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.212 [2024-12-13 12:14:12.758491] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.213 [2024-12-13 12:14:12.758577] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.213 [2024-12-13 12:14:12.758603] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.213 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.472 12:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d4acaf7e-ddc1-4bd3-957b-c5445b8e310b -t 2000 00:08:45.472 [ 00:08:45.472 { 00:08:45.472 "name": "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b", 00:08:45.472 "aliases": [ 00:08:45.472 "lvs/lvol" 00:08:45.472 ], 00:08:45.472 "product_name": "Logical Volume", 00:08:45.472 "block_size": 4096, 00:08:45.472 "num_blocks": 38912, 00:08:45.472 "uuid": "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b", 00:08:45.472 "assigned_rate_limits": { 00:08:45.472 "rw_ios_per_sec": 0, 00:08:45.472 "rw_mbytes_per_sec": 0, 
00:08:45.472 "r_mbytes_per_sec": 0, 00:08:45.472 "w_mbytes_per_sec": 0 00:08:45.472 }, 00:08:45.472 "claimed": false, 00:08:45.472 "zoned": false, 00:08:45.472 "supported_io_types": { 00:08:45.472 "read": true, 00:08:45.472 "write": true, 00:08:45.472 "unmap": true, 00:08:45.472 "flush": false, 00:08:45.472 "reset": true, 00:08:45.472 "nvme_admin": false, 00:08:45.472 "nvme_io": false, 00:08:45.472 "nvme_io_md": false, 00:08:45.472 "write_zeroes": true, 00:08:45.472 "zcopy": false, 00:08:45.472 "get_zone_info": false, 00:08:45.472 "zone_management": false, 00:08:45.472 "zone_append": false, 00:08:45.472 "compare": false, 00:08:45.472 "compare_and_write": false, 00:08:45.472 "abort": false, 00:08:45.472 "seek_hole": true, 00:08:45.472 "seek_data": true, 00:08:45.472 "copy": false, 00:08:45.472 "nvme_iov_md": false 00:08:45.472 }, 00:08:45.472 "driver_specific": { 00:08:45.472 "lvol": { 00:08:45.472 "lvol_store_uuid": "580a8547-160e-42a3-8784-8f00baa2531a", 00:08:45.472 "base_bdev": "aio_bdev", 00:08:45.472 "thin_provision": false, 00:08:45.472 "num_allocated_clusters": 38, 00:08:45.472 "snapshot": false, 00:08:45.472 "clone": false, 00:08:45.472 "esnap_clone": false 00:08:45.472 } 00:08:45.472 } 00:08:45.472 } 00:08:45.472 ] 00:08:45.472 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:45.472 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:45.472 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:45.732 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:45.732 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:45.732 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:45.992 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:45.992 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.992 [2024-12-13 12:14:13.691233] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.251 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:46.252 request: 00:08:46.252 { 00:08:46.252 "uuid": "580a8547-160e-42a3-8784-8f00baa2531a", 00:08:46.252 "method": "bdev_lvol_get_lvstores", 00:08:46.252 "req_id": 1 00:08:46.252 } 00:08:46.252 Got JSON-RPC error response 00:08:46.252 response: 00:08:46.252 { 00:08:46.252 "code": -19, 00:08:46.252 "message": "No such device" 00:08:46.252 } 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.252 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.511 aio_bdev 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.511 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.511 12:14:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.771 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d4acaf7e-ddc1-4bd3-957b-c5445b8e310b -t 2000 00:08:47.030 [ 00:08:47.030 { 00:08:47.030 "name": "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b", 00:08:47.030 "aliases": [ 00:08:47.030 "lvs/lvol" 00:08:47.030 ], 00:08:47.030 "product_name": "Logical Volume", 00:08:47.030 "block_size": 4096, 00:08:47.030 "num_blocks": 38912, 00:08:47.030 "uuid": "d4acaf7e-ddc1-4bd3-957b-c5445b8e310b", 00:08:47.030 "assigned_rate_limits": { 00:08:47.030 "rw_ios_per_sec": 0, 00:08:47.030 "rw_mbytes_per_sec": 0, 00:08:47.030 "r_mbytes_per_sec": 0, 00:08:47.030 "w_mbytes_per_sec": 0 00:08:47.030 }, 00:08:47.030 "claimed": false, 00:08:47.030 "zoned": false, 00:08:47.030 "supported_io_types": { 00:08:47.030 "read": true, 00:08:47.030 "write": true, 00:08:47.030 "unmap": true, 00:08:47.030 "flush": false, 00:08:47.030 "reset": true, 00:08:47.030 "nvme_admin": false, 00:08:47.030 "nvme_io": false, 00:08:47.030 "nvme_io_md": false, 00:08:47.030 "write_zeroes": true, 00:08:47.030 "zcopy": false, 00:08:47.030 "get_zone_info": false, 00:08:47.030 "zone_management": false, 00:08:47.030 "zone_append": false, 00:08:47.030 "compare": false, 00:08:47.030 "compare_and_write": false, 00:08:47.030 "abort": false, 00:08:47.030 "seek_hole": true, 00:08:47.030 "seek_data": true, 00:08:47.030 "copy": false, 00:08:47.030 "nvme_iov_md": false 00:08:47.030 }, 00:08:47.030 "driver_specific": { 00:08:47.030 "lvol": { 00:08:47.030 "lvol_store_uuid": "580a8547-160e-42a3-8784-8f00baa2531a", 00:08:47.030 "base_bdev": "aio_bdev", 00:08:47.030 "thin_provision": false, 00:08:47.030 "num_allocated_clusters": 38, 00:08:47.030 "snapshot": false, 00:08:47.030 "clone": false, 00:08:47.030 "esnap_clone": false 00:08:47.030 } 00:08:47.030 } 00:08:47.030 } 00:08:47.030 ] 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:47.030 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.291 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.291 12:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4acaf7e-ddc1-4bd3-957b-c5445b8e310b 00:08:47.550 12:14:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 580a8547-160e-42a3-8784-8f00baa2531a 00:08:47.810 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.810 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.810 00:08:47.810 real 0m16.757s 00:08:47.810 user 0m43.775s 00:08:47.810 sys 0m3.497s 00:08:47.810 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.810 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.810 ************************************ 00:08:47.810 END TEST lvs_grow_dirty 00:08:47.810 ************************************ 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:48.069 nvmf_trace.0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.069 rmmod nvme_tcp 00:08:48.069 rmmod nvme_fabrics 00:08:48.069 rmmod nvme_keyring 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:48.069 
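Note: with both sub-tests finished, the harness archives the target's shared-memory trace file (nvmf_trace.0, tarred above by process_shm) before unloading the NVMe modules; the lines below then stop the remaining nvmf_tgt process. A sketch of examining such an archive offline, assuming an SPDK checkout at $SPDK and that the spdk_trace tool accepts a trace file via -f (the app's own startup notice earlier in this log suggests 'spdk_trace -s nvmf -i 0' for a live snapshot instead):

    # Unpack the trace blob saved by process_shm and render it human-readable.
    tar -xzf nvmf_trace.0_shm.tar.gz
    $SPDK/build/bin/spdk_trace -f ./nvmf_trace.0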
12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 156864 ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 156864 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 156864 ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 156864 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156864 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156864' 00:08:48.069 killing process with pid 156864 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 156864 00:08:48.069 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 156864 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.329 12:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.236 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.236 00:08:50.236 real 0m41.547s 00:08:50.236 user 1m4.430s 00:08:50.236 sys 0m9.950s 00:08:50.236 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.236 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.236 ************************************ 00:08:50.236 END TEST nvmf_lvs_grow 00:08:50.236 ************************************ 00:08:50.496 12:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.496 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.496 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.496 12:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.496 ************************************ 00:08:50.496 START TEST nvmf_bdev_io_wait 00:08:50.496 ************************************ 00:08:50.496 12:14:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.496 * Looking for test storage... 00:08:50.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.496 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.497 --rc genhtml_branch_coverage=1 00:08:50.497 --rc genhtml_function_coverage=1 00:08:50.497 --rc genhtml_legend=1 00:08:50.497 --rc geninfo_all_blocks=1 00:08:50.497 --rc geninfo_unexecuted_blocks=1 00:08:50.497 00:08:50.497 ' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.497 --rc genhtml_branch_coverage=1 00:08:50.497 --rc genhtml_function_coverage=1 00:08:50.497 --rc genhtml_legend=1 00:08:50.497 --rc geninfo_all_blocks=1 00:08:50.497 --rc geninfo_unexecuted_blocks=1 00:08:50.497 00:08:50.497 ' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.497 --rc genhtml_branch_coverage=1 00:08:50.497 --rc genhtml_function_coverage=1 00:08:50.497 --rc genhtml_legend=1 00:08:50.497 --rc geninfo_all_blocks=1 00:08:50.497 --rc geninfo_unexecuted_blocks=1 00:08:50.497 00:08:50.497 ' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.497 --rc genhtml_branch_coverage=1 00:08:50.497 --rc genhtml_function_coverage=1 00:08:50.497 --rc genhtml_legend=1 00:08:50.497 --rc geninfo_all_blocks=1 00:08:50.497 --rc geninfo_unexecuted_blocks=1 00:08:50.497 00:08:50.497 ' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.497 12:14:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.497 12:14:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.075 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.076 12:14:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.076 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.076 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.076 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:08:57.076 00:08:57.076 --- 10.0.0.2 ping statistics --- 00:08:57.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.076 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:57.076 00:08:57.076 --- 10.0.0.1 ping statistics --- 00:08:57.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.076 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161026 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 161026 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161026 ']' 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.076 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 [2024-12-13 12:14:24.168467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
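Editor's note: the nvmf_tcp_init and nvmfappstart trace above condenses to a short shell sequence. A minimal sketch, using the interface names (cvl_0_0/cvl_0_1), namespace, addresses and flags visible in this run; paths are abbreviated:

    # The target port is isolated in its own network namespace;
    # the initiator keeps the second port in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listen port; the comment tag lets teardown strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # Start the target inside the namespace, paused until RPC configuration arrives.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The rpc_cmd calls traced below then provision that target over /var/tmp/spdk.sock; in scripts/rpc.py terms the same sequence would be (an equivalent sketch, not the literal harness commands):

    rpc.py bdev_set_options -p 5 -c 1      # tiny bdev_io pool, to exercise the io_wait path
    rpc.py framework_start_init            # leave the --wait-for-rpc holding state
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420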
00:08:57.077 [2024-12-13 12:14:24.168513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.077 [2024-12-13 12:14:24.244155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.077 [2024-12-13 12:14:24.268518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.077 [2024-12-13 12:14:24.268558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.077 [2024-12-13 12:14:24.268568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.077 [2024-12-13 12:14:24.268574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.077 [2024-12-13 12:14:24.268579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.077 [2024-12-13 12:14:24.270057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.077 [2024-12-13 12:14:24.270163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.077 [2024-12-13 12:14:24.270273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.077 [2024-12-13 12:14:24.270273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:57.077 [2024-12-13 12:14:24.434029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 Malloc0 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.077 [2024-12-13 12:14:24.485207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161052 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161054 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.077 { 00:08:57.077 "params": { 
00:08:57.077 "name": "Nvme$subsystem", 00:08:57.077 "trtype": "$TEST_TRANSPORT", 00:08:57.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.077 "adrfam": "ipv4", 00:08:57.077 "trsvcid": "$NVMF_PORT", 00:08:57.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.077 "hdgst": ${hdgst:-false}, 00:08:57.077 "ddgst": ${ddgst:-false} 00:08:57.077 }, 00:08:57.077 "method": "bdev_nvme_attach_controller" 00:08:57.077 } 00:08:57.077 EOF 00:08:57.077 )") 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161056 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.077 { 00:08:57.077 "params": { 00:08:57.077 "name": "Nvme$subsystem", 00:08:57.077 "trtype": "$TEST_TRANSPORT", 00:08:57.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.077 "adrfam": "ipv4", 00:08:57.077 "trsvcid": "$NVMF_PORT", 00:08:57.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.077 "hdgst": ${hdgst:-false}, 00:08:57.077 "ddgst": ${ddgst:-false} 00:08:57.077 }, 00:08:57.077 "method": "bdev_nvme_attach_controller" 00:08:57.077 } 00:08:57.077 EOF 00:08:57.077 )") 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161059 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.077 { 00:08:57.077 "params": { 
00:08:57.077 "name": "Nvme$subsystem", 00:08:57.077 "trtype": "$TEST_TRANSPORT", 00:08:57.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.077 "adrfam": "ipv4", 00:08:57.077 "trsvcid": "$NVMF_PORT", 00:08:57.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.077 "hdgst": ${hdgst:-false}, 00:08:57.077 "ddgst": ${ddgst:-false} 00:08:57.077 }, 00:08:57.077 "method": "bdev_nvme_attach_controller" 00:08:57.077 } 00:08:57.077 EOF 00:08:57.077 )") 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.077 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.077 { 00:08:57.077 "params": { 00:08:57.077 "name": "Nvme$subsystem", 00:08:57.078 "trtype": "$TEST_TRANSPORT", 00:08:57.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.078 "adrfam": "ipv4", 00:08:57.078 "trsvcid": "$NVMF_PORT", 00:08:57.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.078 "hdgst": ${hdgst:-false}, 00:08:57.078 "ddgst": ${ddgst:-false} 00:08:57.078 }, 00:08:57.078 "method": "bdev_nvme_attach_controller" 00:08:57.078 } 00:08:57.078 EOF 00:08:57.078 )") 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161052 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.078 "params": { 00:08:57.078 "name": "Nvme1", 00:08:57.078 "trtype": "tcp", 00:08:57.078 "traddr": "10.0.0.2", 00:08:57.078 "adrfam": "ipv4", 00:08:57.078 "trsvcid": "4420", 00:08:57.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.078 "hdgst": false, 00:08:57.078 "ddgst": false 00:08:57.078 }, 00:08:57.078 "method": "bdev_nvme_attach_controller" 00:08:57.078 }' 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.078 "params": { 00:08:57.078 "name": "Nvme1", 00:08:57.078 "trtype": "tcp", 00:08:57.078 "traddr": "10.0.0.2", 00:08:57.078 "adrfam": "ipv4", 00:08:57.078 "trsvcid": "4420", 00:08:57.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.078 "hdgst": false, 00:08:57.078 "ddgst": false 00:08:57.078 }, 00:08:57.078 "method": "bdev_nvme_attach_controller" 00:08:57.078 }' 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.078 "params": { 00:08:57.078 "name": "Nvme1", 00:08:57.078 "trtype": "tcp", 00:08:57.078 "traddr": "10.0.0.2", 00:08:57.078 "adrfam": "ipv4", 00:08:57.078 "trsvcid": "4420", 00:08:57.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.078 "hdgst": false, 00:08:57.078 "ddgst": false 00:08:57.078 }, 00:08:57.078 "method": "bdev_nvme_attach_controller" 00:08:57.078 }' 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.078 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.078 "params": { 00:08:57.078 "name": "Nvme1", 00:08:57.078 "trtype": "tcp", 00:08:57.078 "traddr": "10.0.0.2", 00:08:57.078 "adrfam": "ipv4", 00:08:57.078 "trsvcid": "4420", 00:08:57.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.078 "hdgst": false, 00:08:57.078 "ddgst": false 00:08:57.078 }, 00:08:57.078 "method": "bdev_nvme_attach_controller" 00:08:57.078 }' 00:08:57.078 [2024-12-13 12:14:24.536482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:57.078 [2024-12-13 12:14:24.536531] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:57.078 [2024-12-13 12:14:24.537444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:57.078 [2024-12-13 12:14:24.537490] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:57.078 [2024-12-13 12:14:24.537884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:57.078 [2024-12-13 12:14:24.537924] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:57.078 [2024-12-13 12:14:24.541271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
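Editor's note: the "Starting SPDK ... initialization" reports around this point belong to four bdevperf processes launched in parallel, one per workload (write, read, flush, unmap). They coexist because each gets a disjoint core mask (0x10/0x20/0x40/0x80) and a distinct shared-memory id (-i 1..4), which the EAL parameter lines reflect as --file-prefix=spdk1..spdk4, keeping their hugepage state separate. A sketch of the launch pattern, with gen_cfg from the note above and the flags as traced:

    workloads=(write read flush unmap)
    for i in 1 2 3 4; do
      w=${workloads[i-1]}
      m=$(printf '0x%x' $((0x10 << (i - 1))))   # 0x10, 0x20, 0x40, 0x80
      ./build/examples/bdevperf -m "$m" -i "$i" -q 128 -o 4096 -w "$w" -t 1 -s 256 \
          --json <(gen_cfg) &
    done
    wait   # the harness instead waits on each PID (WRITE_PID, READ_PID, ...) in turn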
00:08:57.078 [2024-12-13 12:14:24.541312] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:08:57.078 [2024-12-13 12:14:24.729302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.078 [2024-12-13 12:14:24.746788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:08:57.340 [2024-12-13 12:14:24.833613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.340 [2024-12-13 12:14:24.853242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:08:57.340 [2024-12-13 12:14:24.889144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.340 [2024-12-13 12:14:24.905029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:08:57.340 [2024-12-13 12:14:24.924453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.340 [2024-12-13 12:14:24.940099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:08:57.609 Running I/O for 1 seconds...
00:08:57.609 Running I/O for 1 seconds...
00:08:57.609 Running I/O for 1 seconds...
00:08:57.609 Running I/O for 1 seconds...
00:08:58.546 8505.00 IOPS, 33.22 MiB/s
00:08:58.546 Latency(us)
00:08:58.546 [2024-12-13T11:14:26.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.546 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:58.546 Nvme1n1 : 1.02 8506.54 33.23 0.00 0.00 14951.98 6335.15 26713.72
00:08:58.546 [2024-12-13T11:14:26.246Z] ===================================================================================================================
00:08:58.546 [2024-12-13T11:14:26.246Z] Total : 8506.54 33.23 0.00 0.00 14951.98 6335.15 26713.72
00:08:58.546 243736.00 IOPS, 952.09 MiB/s
00:08:58.546 Latency(us)
00:08:58.546 [2024-12-13T11:14:26.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.546 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:58.546 Nvme1n1 : 1.00 243367.02 950.65 0.00 0.00 523.33 221.38 1490.16
00:08:58.546 [2024-12-13T11:14:26.246Z] ===================================================================================================================
00:08:58.546 [2024-12-13T11:14:26.246Z] Total : 243367.02 950.65 0.00 0.00 523.33 221.38 1490.16
00:08:58.546 7585.00 IOPS, 29.63 MiB/s
00:08:58.546 Latency(us)
00:08:58.546 [2024-12-13T11:14:26.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.546 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:58.546 Nvme1n1 : 1.01 7670.33 29.96 0.00 0.00 16634.37 5118.05 28461.35
00:08:58.546 [2024-12-13T11:14:26.246Z] ===================================================================================================================
00:08:58.546 [2024-12-13T11:14:26.246Z] Total : 7670.33 29.96 0.00 0.00 16634.37 5118.05 28461.35
00:08:58.546 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161054
00:08:58.546 12463.00 IOPS, 48.68 MiB/s
00:08:58.546 Latency(us)
00:08:58.546 [2024-12-13T11:14:26.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.546 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:58.546 Nvme1n1 : 1.00 12540.27 48.99 0.00 0.00 10181.40 3198.78 19848.05
00:08:58.546
[2024-12-13T11:14:26.246Z] =================================================================================================================== 00:08:58.546 [2024-12-13T11:14:26.246Z] Total : 12540.27 48.99 0.00 0.00 10181.40 3198.78 19848.05 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161056 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161059 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.807 rmmod nvme_tcp 00:08:58.807 rmmod nvme_fabrics 00:08:58.807 rmmod nvme_keyring 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161026 ']' 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161026 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161026 ']' 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161026 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161026 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161026' 00:08:58.807 killing process with pid 161026 
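Editor's note: the four result tables above are internally consistent. Each run keeps 128 commands in flight for one second, so Little's law (average latency ≈ queue depth / IOPS) should reproduce the Average column, and it does:

    write: 128 / 8506.54   s ≈ 15047 us   (table: 14951.98 us)
    read : 128 / 7670.33   s ≈ 16688 us   (table: 16634.37 us)
    unmap: 128 / 12540.27  s ≈ 10207 us   (table: 10181.40 us)
    flush: 128 / 243367.02 s ≈   526 us   (table:   523.33 us)

The flush outlier (~243K IOPS) is expected: the namespace is backed by a RAM Malloc bdev, so a flush has no media to drain and completes almost immediately.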
00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161026 00:08:58.807 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161026 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.067 12:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.606 00:09:01.606 real 0m10.722s 00:09:01.606 user 0m16.094s 00:09:01.606 sys 0m6.076s 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.606 ************************************ 00:09:01.606 END TEST nvmf_bdev_io_wait 00:09:01.606 ************************************ 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.606 ************************************ 00:09:01.606 START TEST nvmf_queue_depth 00:09:01.606 ************************************ 00:09:01.606 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.606 * Looking for test storage... 
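Editor's note: the teardown traced above is deliberately surgical. iptr round-trips the firewall ruleset and drops only the rules the test tagged, remove_spdk_ns deletes the test namespace, and killprocess checks the process name (ps --no-headers -o comm=) before sending the kill. The one-liner behind the iptables-save/iptables-restore pair:

    # keep every rule except those carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore

run_test then starts the next sub-test (nvmf_queue_depth), which is why a fresh START banner and, later, a bash time summary (real/user/sys) bracket each test in this log.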
00:09:01.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.607 --rc genhtml_branch_coverage=1 00:09:01.607 --rc genhtml_function_coverage=1 00:09:01.607 --rc genhtml_legend=1 00:09:01.607 --rc geninfo_all_blocks=1 00:09:01.607 --rc geninfo_unexecuted_blocks=1 00:09:01.607 00:09:01.607 ' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.607 --rc genhtml_branch_coverage=1 00:09:01.607 --rc genhtml_function_coverage=1 00:09:01.607 --rc genhtml_legend=1 00:09:01.607 --rc geninfo_all_blocks=1 00:09:01.607 --rc geninfo_unexecuted_blocks=1 00:09:01.607 00:09:01.607 ' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.607 --rc genhtml_branch_coverage=1 00:09:01.607 --rc genhtml_function_coverage=1 00:09:01.607 --rc genhtml_legend=1 00:09:01.607 --rc geninfo_all_blocks=1 00:09:01.607 --rc geninfo_unexecuted_blocks=1 00:09:01.607 00:09:01.607 ' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.607 --rc genhtml_branch_coverage=1 00:09:01.607 --rc genhtml_function_coverage=1 00:09:01.607 --rc genhtml_legend=1 00:09:01.607 --rc geninfo_all_blocks=1 00:09:01.607 --rc geninfo_unexecuted_blocks=1 00:09:01.607 00:09:01.607 ' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.607 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.608 12:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:08.186 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:08.186 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:08.186 Found net devices under 0000:af:00.0: cvl_0_0 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:08.186 Found net devices under 0000:af:00.1: cvl_0_1 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.186 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:09:08.187 00:09:08.187 --- 10.0.0.2 ping statistics --- 00:09:08.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.187 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:08.187 00:09:08.187 --- 10.0.0.1 ping statistics --- 00:09:08.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.187 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=164984 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 164984 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 164984 ']' 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.187 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 [2024-12-13 12:14:34.950855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:08.187 [2024-12-13 12:14:34.950901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.187 [2024-12-13 12:14:35.027306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.187 [2024-12-13 12:14:35.047977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.187 [2024-12-13 12:14:35.048022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.187 [2024-12-13 12:14:35.048031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.187 [2024-12-13 12:14:35.048037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.187 [2024-12-13 12:14:35.048042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.187 [2024-12-13 12:14:35.048517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 [2024-12-13 12:14:35.186047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 Malloc0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 [2024-12-13 12:14:35.235879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165007 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165007 /var/tmp/bdevperf.sock 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165007 ']' 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 [2024-12-13 12:14:35.286076] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:08.187 [2024-12-13 12:14:35.286116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165007 ] 00:09:08.187 [2024-12-13 12:14:35.360172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.187 [2024-12-13 12:14:35.382993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.187 NVMe0n1 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.187 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:08.187 Running I/O for 10 seconds... 00:09:10.062 12276.00 IOPS, 47.95 MiB/s [2024-12-13T11:14:38.699Z] 12262.50 IOPS, 47.90 MiB/s [2024-12-13T11:14:40.079Z] 12288.33 IOPS, 48.00 MiB/s [2024-12-13T11:14:41.017Z] 12361.50 IOPS, 48.29 MiB/s [2024-12-13T11:14:41.957Z] 12427.00 IOPS, 48.54 MiB/s [2024-12-13T11:14:42.894Z] 12447.67 IOPS, 48.62 MiB/s [2024-12-13T11:14:43.831Z] 12491.29 IOPS, 48.79 MiB/s [2024-12-13T11:14:44.770Z] 12524.00 IOPS, 48.92 MiB/s [2024-12-13T11:14:45.709Z] 12527.11 IOPS, 48.93 MiB/s [2024-12-13T11:14:45.969Z] 12576.10 IOPS, 49.13 MiB/s 00:09:18.269 Latency(us) 00:09:18.269 [2024-12-13T11:14:45.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.269 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:18.269 Verification LBA range: start 0x0 length 0x4000 00:09:18.269 NVMe0n1 : 10.06 12602.83 49.23 0.00 0.00 81003.58 19223.89 53677.10 00:09:18.269 [2024-12-13T11:14:45.969Z] =================================================================================================================== 00:09:18.269 [2024-12-13T11:14:45.969Z] Total : 12602.83 49.23 0.00 0.00 81003.58 19223.89 53677.10 00:09:18.269 { 00:09:18.269 "results": [ 00:09:18.269 { 00:09:18.269 "job": "NVMe0n1", 00:09:18.269 "core_mask": "0x1", 00:09:18.269 "workload": "verify", 00:09:18.269 "status": "finished", 00:09:18.269 "verify_range": { 00:09:18.269 "start": 0, 00:09:18.269 "length": 16384 00:09:18.269 }, 00:09:18.269 "queue_depth": 1024, 00:09:18.269 "io_size": 4096, 00:09:18.269 "runtime": 10.060039, 00:09:18.269 "iops": 12602.833845872765, 00:09:18.269 "mibps": 49.22981971044049, 00:09:18.269 "io_failed": 0, 00:09:18.269 "io_timeout": 0, 00:09:18.269 "avg_latency_us": 81003.58150380566, 00:09:18.269 "min_latency_us": 19223.893333333333, 00:09:18.269 "max_latency_us": 53677.10476190476 00:09:18.269 } 00:09:18.269 ], 00:09:18.269 "core_count": 1 00:09:18.269 } 00:09:18.269 12:14:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 165007 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165007 ']' 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165007 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165007 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165007' 00:09:18.269 killing process with pid 165007 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165007 00:09:18.269 Received shutdown signal, test time was about 10.000000 seconds 00:09:18.269 00:09:18.269 Latency(us) 00:09:18.269 [2024-12-13T11:14:45.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.269 [2024-12-13T11:14:45.969Z] =================================================================================================================== 00:09:18.269 [2024-12-13T11:14:45.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165007 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.269 12:14:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.269 rmmod nvme_tcp 00:09:18.528 rmmod nvme_fabrics 00:09:18.528 rmmod nvme_keyring 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 164984 ']' 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 164984 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 164984 ']' 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 164984 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164984 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164984' 00:09:18.528 killing process with pid 164984 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 164984 00:09:18.528 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 164984 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.788 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.696 00:09:20.696 real 0m19.549s 00:09:20.696 user 0m22.940s 00:09:20.696 sys 0m5.900s 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.696 ************************************ 00:09:20.696 END TEST nvmf_queue_depth 00:09:20.696 ************************************ 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.696 ************************************ 00:09:20.696 START TEST nvmf_target_multipath 00:09:20.696 ************************************ 00:09:20.696 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:20.957 * Looking for test storage... 00:09:20.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.957 --rc genhtml_branch_coverage=1 00:09:20.957 --rc genhtml_function_coverage=1 00:09:20.957 --rc genhtml_legend=1 00:09:20.957 --rc geninfo_all_blocks=1 00:09:20.957 --rc geninfo_unexecuted_blocks=1 00:09:20.957 00:09:20.957 ' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.957 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.958 12:14:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.535 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.535 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.535 12:14:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.535 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.535 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:09:27.535 00:09:27.535 --- 10.0.0.2 ping statistics --- 00:09:27.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.536 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:27.536 00:09:27.536 --- 10.0.0.1 ping statistics --- 00:09:27.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.536 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:27.536 only one NIC for nvmf test 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
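
Annotation: the nvmf_tcp_init sequence traced above (common.sh@250-291) is easiest to read as one unit. Having found two ports of the same Intel E810 adapter (0x8086:0x159b at 0000:af:00.0/1), the harness moves one port into a private network namespace to act as the target and leaves the other in the host namespace as the initiator, so NVMe/TCP traffic genuinely crosses between the two ports. A condensed sketch of those steps, with interface names and addresses copied from this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays in the host (initiator)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

NVMF_APP is then prefixed with the NVMF_TARGET_NS_CMD (common.sh@293), so any nvmf_tgt launched later runs inside cvl_0_0_ns_spdk. The multipath test itself bails out right after this setup because the empty variable tested at multipath.sh@45 (presumably NVMF_SECOND_TARGET_IP, which common.sh@262 left blank) means there is no second path to exercise, hence 'only one NIC for nvmf test'.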
00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.536 rmmod nvme_tcp 00:09:27.536 rmmod nvme_fabrics 00:09:27.536 rmmod nvme_keyring 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.536 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.446 00:09:29.446 real 0m8.337s 00:09:29.446 user 0m1.887s 00:09:29.446 sys 0m4.458s 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.446 ************************************ 00:09:29.446 END TEST nvmf_target_multipath 00:09:29.446 ************************************ 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.446 ************************************ 00:09:29.446 START TEST nvmf_zcopy 00:09:29.446 ************************************ 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.446 * Looking for test storage... 
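
Annotation: one real, if benign, shell bug is visible in this trace and prints again when the zcopy test re-sources common.sh below: line 33 expands to '[' '' -eq 1 ']', and test(1) rejects an empty string as an operand of -eq, hence '[: : integer expression expected'. The script survives only because the failed test happens to take the same branch as a false result. A minimal repro and the usual hardening; flag is a hypothetical stand-in, since the log does not show which variable common.sh line 33 actually expands:

    flag=''
    [ "$flag" -eq 1 ] && echo set      # prints "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] && echo set # defaulting to 0 keeps the numeric test well-formed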
00:09:29.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.446 --rc genhtml_branch_coverage=1 00:09:29.446 --rc genhtml_function_coverage=1 00:09:29.446 --rc genhtml_legend=1 00:09:29.446 --rc geninfo_all_blocks=1 00:09:29.446 --rc geninfo_unexecuted_blocks=1 00:09:29.446 00:09:29.446 ' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.446 --rc genhtml_branch_coverage=1 00:09:29.446 --rc genhtml_function_coverage=1 00:09:29.446 --rc genhtml_legend=1 00:09:29.446 --rc geninfo_all_blocks=1 00:09:29.446 --rc geninfo_unexecuted_blocks=1 00:09:29.446 00:09:29.446 ' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.446 --rc genhtml_branch_coverage=1 00:09:29.446 --rc genhtml_function_coverage=1 00:09:29.446 --rc genhtml_legend=1 00:09:29.446 --rc geninfo_all_blocks=1 00:09:29.446 --rc geninfo_unexecuted_blocks=1 00:09:29.446 00:09:29.446 ' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.446 --rc genhtml_branch_coverage=1 00:09:29.446 --rc genhtml_function_coverage=1 00:09:29.446 --rc genhtml_legend=1 00:09:29.446 --rc geninfo_all_blocks=1 00:09:29.446 --rc geninfo_unexecuted_blocks=1 00:09:29.446 00:09:29.446 ' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.446 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.447 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.447 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.447 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:36.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:36.023 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:36.023 Found net devices under 0000:af:00.0: cvl_0_0 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:36.023 Found net devices under 0000:af:00.1: cvl_0_1 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.023 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:09:36.024 00:09:36.024 --- 10.0.0.2 ping statistics --- 00:09:36.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.024 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:36.024 00:09:36.024 --- 10.0.0.1 ping statistics --- 00:09:36.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.024 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=173878 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 173878 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 173878 ']' 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.024 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 [2024-12-13 12:15:02.979063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:36.024 [2024-12-13 12:15:02.979105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.024 [2024-12-13 12:15:03.057357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.024 [2024-12-13 12:15:03.078158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.024 [2024-12-13 12:15:03.078191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.024 [2024-12-13 12:15:03.078199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.024 [2024-12-13 12:15:03.078204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.024 [2024-12-13 12:15:03.078209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.024 [2024-12-13 12:15:03.078738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 [2024-12-13 12:15:03.204896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 [2024-12-13 12:15:03.221059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 malloc0 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.024 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.024 { 00:09:36.024 "params": { 00:09:36.024 "name": "Nvme$subsystem", 00:09:36.024 "trtype": "$TEST_TRANSPORT", 00:09:36.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.024 "adrfam": "ipv4", 00:09:36.024 "trsvcid": "$NVMF_PORT", 00:09:36.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.024 "hdgst": ${hdgst:-false}, 00:09:36.024 "ddgst": ${ddgst:-false} 00:09:36.024 }, 00:09:36.024 "method": "bdev_nvme_attach_controller" 00:09:36.024 } 00:09:36.025 EOF 00:09:36.025 )") 00:09:36.025 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:36.025 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
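
Annotation: with the target up inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 173878, RPC socket /var/tmp/spdk.sock per waitforlisten above), the zcopy test provisions it entirely over RPC. The calls are scattered through the trace, so here is the same sequence in one place, as a sketch; the commands and arguments are copied from the log, and rpc_cmd is assumed to be a thin wrapper over scripts/rpc.py as it was for the multipath test:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                     # any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then prints the bdev_nvme_attach_controller parameters shown just below, which bdevperf reads over /dev/fd/62, so the initiator side needs no config file on disk.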
00:09:36.025 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:36.025 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.025 "params": { 00:09:36.025 "name": "Nvme1", 00:09:36.025 "trtype": "tcp", 00:09:36.025 "traddr": "10.0.0.2", 00:09:36.025 "adrfam": "ipv4", 00:09:36.025 "trsvcid": "4420", 00:09:36.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.025 "hdgst": false, 00:09:36.025 "ddgst": false 00:09:36.025 }, 00:09:36.025 "method": "bdev_nvme_attach_controller" 00:09:36.025 }' 00:09:36.025 [2024-12-13 12:15:03.297326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:36.025 [2024-12-13 12:15:03.297375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174101 ] 00:09:36.025 [2024-12-13 12:15:03.372578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.025 [2024-12-13 12:15:03.395656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.025 Running I/O for 10 seconds... 00:09:37.906 8726.00 IOPS, 68.17 MiB/s [2024-12-13T11:15:06.986Z] 8798.50 IOPS, 68.74 MiB/s [2024-12-13T11:15:07.924Z] 8826.67 IOPS, 68.96 MiB/s [2024-12-13T11:15:08.863Z] 8826.00 IOPS, 68.95 MiB/s [2024-12-13T11:15:09.801Z] 8810.60 IOPS, 68.83 MiB/s [2024-12-13T11:15:10.740Z] 8830.33 IOPS, 68.99 MiB/s [2024-12-13T11:15:11.678Z] 8822.43 IOPS, 68.93 MiB/s [2024-12-13T11:15:12.617Z] 8830.12 IOPS, 68.99 MiB/s [2024-12-13T11:15:13.998Z] 8842.11 IOPS, 69.08 MiB/s [2024-12-13T11:15:13.998Z] 8850.00 IOPS, 69.14 MiB/s 00:09:46.298 Latency(us) 00:09:46.298 [2024-12-13T11:15:13.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.298 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:46.298 Verification LBA range: start 0x0 length 0x1000 00:09:46.298 Nvme1n1 : 10.01 8850.42 69.14 0.00 0.00 14420.87 438.86 25090.93 00:09:46.298 [2024-12-13T11:15:13.998Z] =================================================================================================================== 00:09:46.298 [2024-12-13T11:15:13.998Z] Total : 8850.42 69.14 0.00 0.00 14420.87 438.86 25090.93 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=176075 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.298 { 00:09:46.298 "params": { 00:09:46.298 "name": 
"Nvme$subsystem", 00:09:46.298 "trtype": "$TEST_TRANSPORT", 00:09:46.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.298 "adrfam": "ipv4", 00:09:46.298 "trsvcid": "$NVMF_PORT", 00:09:46.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.298 "hdgst": ${hdgst:-false}, 00:09:46.298 "ddgst": ${ddgst:-false} 00:09:46.298 }, 00:09:46.298 "method": "bdev_nvme_attach_controller" 00:09:46.298 } 00:09:46.298 EOF 00:09:46.298 )") 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:46.298 [2024-12-13 12:15:13.781011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.298 [2024-12-13 12:15:13.781043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:46.298 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.298 "params": { 00:09:46.298 "name": "Nvme1", 00:09:46.298 "trtype": "tcp", 00:09:46.298 "traddr": "10.0.0.2", 00:09:46.298 "adrfam": "ipv4", 00:09:46.298 "trsvcid": "4420", 00:09:46.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.298 "hdgst": false, 00:09:46.298 "ddgst": false 00:09:46.298 }, 00:09:46.298 "method": "bdev_nvme_attach_controller" 00:09:46.298 }' 00:09:46.298 [2024-12-13 12:15:13.789017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.298 [2024-12-13 12:15:13.789036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.298 [2024-12-13 12:15:13.797026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.299 [2024-12-13 12:15:13.797048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.299 [2024-12-13 12:15:13.805046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.299 [2024-12-13 12:15:13.805062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.299 [2024-12-13 12:15:13.813067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.299 [2024-12-13 12:15:13.813082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.299 [2024-12-13 12:15:13.819051] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:46.299 [2024-12-13 12:15:13.819093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176075 ] 00:09:46.299 [2024-12-13 12:15:13.825099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.299 [2024-12-13 12:15:13.825115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.299 [2024-12-13 12:15:13.891581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.299 [2024-12-13 12:15:13.913979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.299 [2024-12-13 12:15:14.105862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.559 [2024-12-13 12:15:14.105878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.559 Running I/O for 5 seconds...
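The paired subsystem.c/nvmf_rpc.c errors that fill the rest of this window are expected: while the 5-second randrw job runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies, so each attempt pauses the subsystem, fails, and resumes it, exercising the pause/resume path under active zcopy I/O. A hypothetical reconstruction of the driving loop (a sketch, not the verbatim zcopy.sh source; perfpid is the bdevperf pid captured above):

  while kill -0 "$perfpid" 2> /dev/null; do
      # Expected to fail with "Requested NSID 1 already in use"; the point is the
      # subsystem pause/resume that each attempt triggers while I/O is in flight.
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done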
00:09:46.559 [2024-12-13 12:15:14.117013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.559 [2024-12-13 12:15:14.117032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.601 16870.00 IOPS, 131.80 MiB/s [2024-12-13T11:15:15.301Z] [2024-12-13 12:15:15.119276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.601 [2024-12-13 12:15:15.119296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.930332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.930350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.938752]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.938771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.947734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.947752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.956878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.956897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.966481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.966500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.975140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.975159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.383 [2024-12-13 12:15:15.983560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.383 [2024-12-13 12:15:15.983578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:15.992764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:15.992794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.001819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.001838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.011036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.011055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.020271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.020290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.029401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.029419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.038530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.038548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.047612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.047630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.056239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.056258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.065332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.065350] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.384 [2024-12-13 12:15:16.074689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.384 [2024-12-13 12:15:16.074708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.083943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.083963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.092547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.092567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.101123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.101142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.110301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.110320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 16870.50 IOPS, 131.80 MiB/s [2024-12-13T11:15:16.343Z] [2024-12-13 12:15:16.117013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.117032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.127796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.127815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.136457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.136476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.145574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.145593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.154652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.154671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.164380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.164399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.171327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.171345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.181237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.181255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.189803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.189822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 
12:15:16.198864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.198883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.207670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.207690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.216708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.216727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.225778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.225802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.235009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.235028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.243537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.243558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.252611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.252632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.261739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.261759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.268600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.268619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.279551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.279575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.288981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.289000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.297625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.297644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.307076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.307095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.316118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.316137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.325332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.325351] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.643 [2024-12-13 12:15:16.334458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.643 [2024-12-13 12:15:16.334478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.343803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.343825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.352561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.352581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.361609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.361628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.371262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.371293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.379794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.379813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.388776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.388803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.397894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.397913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.406409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.406428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.416129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.416148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.425141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.425160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.433567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.433585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.442152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.442171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.450649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.450673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.459739] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.459758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.468546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.468565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.477645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.477665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.486516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.486537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.495548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.495567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.504791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.504813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.513823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.513844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.522602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.522622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.531537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.531556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.540527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.540546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.549527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.549546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.558633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.558653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.567804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.903 [2024-12-13 12:15:16.567824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.903 [2024-12-13 12:15:16.574912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-12-13 12:15:16.574932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-12-13 12:15:16.584568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-12-13 12:15:16.584588] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.904 [2024-12-13 12:15:16.594392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.904 [2024-12-13 12:15:16.594412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.603507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.603527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.612249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.612268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.621327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.621350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.630400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.630419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.638878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.638898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.648213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.648233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.657413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.657432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.666643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.666663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.675761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.675787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.684420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.684440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.693480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.693500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.702636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.702656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.712341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.712361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.721527] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.721547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.730250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.730270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.739437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.739456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.748053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.748072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.757033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.757053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.766281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.766301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.774692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.774711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.783750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.783769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.792882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.792902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.802634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.802654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.812223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.812242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.821358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.821378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.830127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.830147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.837020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.837040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.847613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.847633] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.164 [2024-12-13 12:15:16.856441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.164 [2024-12-13 12:15:16.856460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.865473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.865493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.874475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.874494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.883619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.883637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.892860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.892879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.901908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.901928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.910880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.910900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.919895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.919914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.929465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.929484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.938547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.938566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.948214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.948233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.957959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.957978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.966789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.966809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.975267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.424 [2024-12-13 12:15:16.975287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.424 [2024-12-13 12:15:16.984476] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:16.984494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:16.993210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:16.993228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.002405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.002423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.011549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.011568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.020613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.020632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.029742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.029761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.038920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.038939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.048053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.048072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.057593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.057611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.066907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.066926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.076118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.076137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.085380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.085399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.094767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.094794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.103228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.103247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 [2024-12-13 12:15:17.111732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.111751] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.425 16908.33 IOPS, 132.10 MiB/s [2024-12-13T11:15:17.125Z] [2024-12-13 12:15:17.120977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.425 [2024-12-13 12:15:17.120997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.130166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.130186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.139464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.139483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.148616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.148635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.157737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.157755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.166268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.166286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.175294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.175312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.184296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.184315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.193466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.193485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.203055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.203074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.212309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.212327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.220869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.220888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.229430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.229449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.237789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.237807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 
12:15:17.246242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.246260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.255332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.255351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.264231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.264250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.273541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.273560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.283083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.283101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.291630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.291649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.300170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.300193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.309461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.309480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.318466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.318485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.327378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.327397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.336786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.336805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.345859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.345878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.354888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.354908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.363953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.363973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.370842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.370862] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.685 [2024-12-13 12:15:17.381599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.685 [2024-12-13 12:15:17.381619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.390287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.390306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.399448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.399467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.408624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.408644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.417635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.417654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.426765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.426791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.435453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.435472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.444152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.444171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.452807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.452826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.461941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.461960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.471237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.471261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.480364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.480384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.945 [2024-12-13 12:15:17.489540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.945 [2024-12-13 12:15:17.489559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.496320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.496339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.507172] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.507190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.515819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.515838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.524436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.524455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.533550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.533568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.542543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.542562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.552242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.552261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.561483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.561503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.569877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.569896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.579092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.579111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.588483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.588503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.597693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.597714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.606725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.606744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.615834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.615854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.625147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.625166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.946 [2024-12-13 12:15:17.633618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.946 [2024-12-13 12:15:17.633636] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:49.946 [2024-12-13 12:15:17.642771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:49.946 [2024-12-13 12:15:17.642802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:50.205 [... the same two-line error pair repeats, roughly every 9 ms, from 12:15:17.651928 through 12:15:18.116836; identical repeats omitted ...]
00:09:50.465 16917.50 IOPS, 132.17 MiB/s [2024-12-13T11:15:18.165Z]
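The paired errors above are the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns with a fixed NSID while the subsystem cycles through pause/resume; every attempt is rejected because NSID 1 is still claimed. A minimal sketch of the conflicting calls, matching the rpc_cmd usage traced later in this log (the bdev names here are illustrative, not taken from this run):
# Sketch only: requesting an explicit NSID that is already claimed makes the
# second add fail with "Requested NSID 1 already in use".
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: NSID 1 is taken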
00:09:50.465 [... the error pair keeps repeating at the same cadence, from 12:15:18.125621 through 12:15:19.128994, while the I/O job runs; identical repeats omitted ...]
00:09:51.506 16926.20 IOPS, 132.24 MiB/s [2024-12-13T11:15:19.206Z]
00:09:51.506 Latency(us)
00:09:51.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:51.506 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:51.506 Nvme1n1 : 5.01 16928.45 132.25 0.00 0.00 7553.99 2652.65 15291.73
00:09:51.506 ===================================================================================================================
00:09:51.506 Total : 16928.45 132.25 0.00 0.00 7553.99 2652.65 15291.73
00:09:51.506 [... after the I/O summary the error pair repeats once more per remaining attempt, from 12:15:19.137012 through 12:15:19.273386, as the queue drains ...]
00:09:51.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176075) - No such process
00:09:51.766 12:15:19
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176075 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.766 delay0 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.766 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.766 [2024-12-13 12:15:19.457982] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:58.343 Initializing NVMe Controllers 00:09:58.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:58.343 Initialization complete. Launching workers. 
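For readers following the trace: bdev_delay_create above wraps malloc0 in a latency-injecting bdev, and the abort example then drives I/O against it so that queued commands can be aborted. A sketch of those two steps with the flags annotated (values copied from the trace; flag meanings follow SPDK's documented rpc conventions, so treat the annotations as a reading aid rather than authoritative):
# -b base bdev to wrap, -d name of the new delay bdev; the four latencies are
# in microseconds: -r average read, -t p99 read, -w average write, -n p99 write.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Drive it with the abort example: core mask 0x1, 5 s run, queue depth 64,
# 50/50 random read/write, against the target's TCP listener.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'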
00:09:58.343 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:09:58.343 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:09:58.343 success 202, unsuccessful 194, failed 0 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.343 rmmod nvme_tcp 00:09:58.343 rmmod nvme_fabrics 00:09:58.343 rmmod nvme_keyring 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 173878 ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 173878 ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173878' 00:09:58.343 killing process with pid 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 173878 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.343 12:15:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.343 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.257 00:10:00.257 real 0m31.112s 00:10:00.257 user 0m42.576s 00:10:00.257 sys 0m9.618s 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 ************************************ 00:10:00.257 END TEST nvmf_zcopy 00:10:00.257 ************************************ 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.257 12:15:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.517 ************************************ 00:10:00.517 START TEST nvmf_nmic 00:10:00.517 ************************************ 00:10:00.517 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.517 * Looking for test storage... 
00:10:00.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.517 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.518 --rc genhtml_branch_coverage=1 00:10:00.518 --rc genhtml_function_coverage=1 00:10:00.518 --rc genhtml_legend=1 00:10:00.518 --rc geninfo_all_blocks=1 00:10:00.518 --rc geninfo_unexecuted_blocks=1 00:10:00.518 00:10:00.518 ' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.518 --rc genhtml_branch_coverage=1 00:10:00.518 --rc genhtml_function_coverage=1 00:10:00.518 --rc genhtml_legend=1 00:10:00.518 --rc geninfo_all_blocks=1 00:10:00.518 --rc geninfo_unexecuted_blocks=1 00:10:00.518 00:10:00.518 ' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.518 --rc genhtml_branch_coverage=1 00:10:00.518 --rc genhtml_function_coverage=1 00:10:00.518 --rc genhtml_legend=1 00:10:00.518 --rc geninfo_all_blocks=1 00:10:00.518 --rc geninfo_unexecuted_blocks=1 00:10:00.518 00:10:00.518 ' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.518 --rc genhtml_branch_coverage=1 00:10:00.518 --rc genhtml_function_coverage=1 00:10:00.518 --rc genhtml_legend=1 00:10:00.518 --rc geninfo_all_blocks=1 00:10:00.518 --rc geninfo_unexecuted_blocks=1 00:10:00.518 00:10:00.518 ' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
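The lt/cmp_versions trace above is scripts/common.sh comparing the installed lcov version against 2: each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right. A condensed sketch of that logic (simplified; the real helper also normalizes non-numeric fields before comparing):
# Returns success (0) when $1 sorts strictly before $2, field by field.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1
}
version_lt 1.15 2 && echo "1.15 < 2"   # the comparison traced above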
00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.518 
12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.518 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:07.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.095 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:07.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.096 12:15:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:07.096 Found net devices under 0000:af:00.0: cvl_0_0 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:07.096 Found net devices under 0000:af:00.1: cvl_0_1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
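[annotation] The discovery loop above globs sysfs to map each NIC's PCI address to the kernel interface bound to it, which is how "Found net devices under 0000:af:00.0: cvl_0_0" is produced. A standalone sketch of the same walk, using the two addresses and IDs from the output above:

    #!/usr/bin/env bash
    # Map each PCI network function to its bound kernel interface via sysfs,
    # as the gather_supported_nvmf_pci_devs loop does.
    for pci in 0000:af:00.0 0000:af:00.1; do
        vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 here
        device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b here
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$net" ] || continue                      # no driver bound
            echo "Found net device under $pci: ${net##*/} ($vendor - $device)"
        done
    done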
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.096 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:10:07.096 00:10:07.096 --- 10.0.0.2 ping statistics --- 00:10:07.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.096 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:10:07.096 00:10:07.096 --- 10.0.0.1 ping statistics --- 00:10:07.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.096 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181548 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181548 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181548 ']' 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.096 [2024-12-13 12:15:34.202809] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
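[annotation] The nvmf_tcp_init steps traced above move the first port (cvl_0_0, the target side) into a network namespace, keep the second port (cvl_0_1, the initiator side) in the root namespace, and verify reachability in both directions before starting nvmf_tgt inside the namespace. A condensed, standalone version of exactly the commands shown (run as root; interface names come from the discovery step):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Only after both pings succeed does the harness prefix NVMF_APP with "ip netns exec cvl_0_0_ns_spdk" and launch the target, which is why the listener at 10.0.0.2:4420 is reachable from the root namespace.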
00:10:07.096 [2024-12-13 12:15:34.202850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.096 [2024-12-13 12:15:34.278819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.096 [2024-12-13 12:15:34.303094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.096 [2024-12-13 12:15:34.303130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.096 [2024-12-13 12:15:34.303138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.096 [2024-12-13 12:15:34.303143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.096 [2024-12-13 12:15:34.303148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.096 [2024-12-13 12:15:34.305798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.096 [2024-12-13 12:15:34.305825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.096 [2024-12-13 12:15:34.305849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.096 [2024-12-13 12:15:34.305851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.096 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 [2024-12-13 12:15:34.439194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 Malloc0 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 [2024-12-13 12:15:34.498270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:07.097 test case1: single bdev can't be used in multiple subsystems 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 [2024-12-13 12:15:34.526177] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:07.097 [2024-12-13 12:15:34.526196] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:07.097 [2024-12-13 12:15:34.526203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.097 request: 00:10:07.097 { 00:10:07.097 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:07.097 "namespace": { 00:10:07.097 "bdev_name": "Malloc0", 00:10:07.097 "no_auto_visible": false, 
00:10:07.097 "hide_metadata": false 00:10:07.097 }, 00:10:07.097 "method": "nvmf_subsystem_add_ns", 00:10:07.097 "req_id": 1 00:10:07.097 } 00:10:07.097 Got JSON-RPC error response 00:10:07.097 response: 00:10:07.097 { 00:10:07.097 "code": -32602, 00:10:07.097 "message": "Invalid parameters" 00:10:07.097 } 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:07.097 Adding namespace failed - expected result. 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:07.097 test case2: host connect to nvmf target in multiple paths 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.097 [2024-12-13 12:15:34.538293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.097 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.034 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:09.413 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.413 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:09.413 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.413 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:09.413 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:11.320 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:11.320 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:11.320 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.320 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:11.320 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.321 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:11.321 12:15:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:11.321 [global] 00:10:11.321 thread=1 00:10:11.321 invalidate=1 00:10:11.321 rw=write 00:10:11.321 time_based=1 00:10:11.321 runtime=1 00:10:11.321 ioengine=libaio 00:10:11.321 direct=1 00:10:11.321 bs=4096 00:10:11.321 iodepth=1 00:10:11.321 norandommap=0 00:10:11.321 numjobs=1 00:10:11.321 00:10:11.321 verify_dump=1 00:10:11.321 verify_backlog=512 00:10:11.321 verify_state_save=0 00:10:11.321 do_verify=1 00:10:11.321 verify=crc32c-intel 00:10:11.321 [job0] 00:10:11.321 filename=/dev/nvme0n1 00:10:11.321 Could not set queue depth (nvme0n1) 00:10:11.889 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.889 fio-3.35 00:10:11.889 Starting 1 thread 00:10:13.269 00:10:13.269 job0: (groupid=0, jobs=1): err= 0: pid=182599: Fri Dec 13 12:15:40 2024 00:10:13.269 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:13.269 slat (nsec): min=6899, max=37074, avg=7859.27, stdev=1423.44 00:10:13.269 clat (usec): min=155, max=281, avg=183.70, stdev=15.44 00:10:13.269 lat (usec): min=167, max=288, avg=191.56, stdev=15.52 00:10:13.269 clat percentiles (usec): 00:10:13.269 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:10:13.269 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:10:13.269 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 219], 00:10:13.269 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 277], 00:10:13.269 | 99.99th=[ 281] 00:10:13.270 write: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:10:13.270 slat (usec): min=10, max=26655, avg=20.43, stdev=485.64 00:10:13.270 clat (usec): min=109, max=414, avg=143.57, stdev=23.37 00:10:13.270 lat (usec): min=125, max=26954, avg=164.00, stdev=489.04 00:10:13.270 clat percentiles (usec): 00:10:13.270 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:10:13.270 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 145], 00:10:13.270 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 176], 00:10:13.270 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 269], 99.95th=[ 310], 00:10:13.270 | 99.99th=[ 416] 00:10:13.270 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:13.270 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:13.270 lat (usec) : 250=99.41%, 500=0.59% 00:10:13.270 cpu : usr=6.00%, sys=7.20%, ctx=5573, majf=0, minf=1 00:10:13.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.270 issued rwts: total=2560,3010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.270 00:10:13.270 Run status group 0 (all jobs): 00:10:13.270 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:13.270 WRITE: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.8MiB (12.3MB), run=1001-1001msec 00:10:13.270 00:10:13.270 Disk stats (read/write): 00:10:13.270 nvme0n1: ios=2421/2560, merge=0/0, ticks=1397/340, in_queue=1737, util=98.40% 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.270 rmmod nvme_tcp 00:10:13.270 rmmod nvme_fabrics 00:10:13.270 rmmod nvme_keyring 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181548 ']' 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181548 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181548 ']' 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181548 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181548 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181548' 00:10:13.270 killing process with pid 181548 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181548 00:10:13.270 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 181548 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.529 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.070 00:10:16.070 real 0m15.190s 00:10:16.070 user 0m34.303s 00:10:16.070 sys 0m5.589s 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.070 ************************************ 00:10:16.070 END TEST nvmf_nmic 00:10:16.070 ************************************ 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.070 ************************************ 00:10:16.070 START TEST nvmf_fio_target 00:10:16.070 ************************************ 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:16.070 * Looking for test storage... 
00:10:16.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:16.070 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.071 --rc genhtml_branch_coverage=1 00:10:16.071 --rc genhtml_function_coverage=1 00:10:16.071 --rc genhtml_legend=1 00:10:16.071 --rc geninfo_all_blocks=1 00:10:16.071 --rc geninfo_unexecuted_blocks=1 00:10:16.071 00:10:16.071 ' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.071 --rc genhtml_branch_coverage=1 00:10:16.071 --rc genhtml_function_coverage=1 00:10:16.071 --rc genhtml_legend=1 00:10:16.071 --rc geninfo_all_blocks=1 00:10:16.071 --rc geninfo_unexecuted_blocks=1 00:10:16.071 00:10:16.071 ' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.071 --rc genhtml_branch_coverage=1 00:10:16.071 --rc genhtml_function_coverage=1 00:10:16.071 --rc genhtml_legend=1 00:10:16.071 --rc geninfo_all_blocks=1 00:10:16.071 --rc geninfo_unexecuted_blocks=1 00:10:16.071 00:10:16.071 ' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.071 --rc genhtml_branch_coverage=1 00:10:16.071 --rc genhtml_function_coverage=1 00:10:16.071 --rc genhtml_legend=1 00:10:16.071 --rc geninfo_all_blocks=1 00:10:16.071 --rc geninfo_unexecuted_blocks=1 00:10:16.071 00:10:16.071 ' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.071 12:15:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.071 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.650 12:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:22.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:22.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.650 12:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:22.650 Found net devices under 0000:af:00.0: cvl_0_0 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:22.650 Found net devices under 0000:af:00.1: cvl_0_1 00:10:22.650 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.651 12:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:10:22.651 00:10:22.651 --- 10.0.0.2 ping statistics --- 00:10:22.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.651 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:10:22.651 00:10:22.651 --- 10.0.0.1 ping statistics --- 00:10:22.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.651 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186300 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186300 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186300 ']' 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.651 [2024-12-13 12:15:49.452208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
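[annotation] The netns plumbing traced above is worth pausing on: nvmf_tcp_init keeps one port of the e810 pair (cvl_0_1, the initiator side, 10.0.0.1) in the root namespace and moves the other (cvl_0_0, the target side, 10.0.0.2) into a private namespace, so a single host can exercise NVMe/TCP over real NICs. A minimal sketch of the equivalent setup, condensed from the commands in the trace and using this run's interface names and addresses:

    ip netns add cvl_0_0_ns_spdk                     # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                               # root ns -> target ns sanity check

Every subsequent target-side command is wrapped in 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch above and its DPDK initialization messages below run inside the namespace.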
00:10:22.651 [2024-12-13 12:15:49.452249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.651 [2024-12-13 12:15:49.526777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.651 [2024-12-13 12:15:49.550261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.651 [2024-12-13 12:15:49.550300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.651 [2024-12-13 12:15:49.550307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.651 [2024-12-13 12:15:49.550313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.651 [2024-12-13 12:15:49.550321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.651 [2024-12-13 12:15:49.551654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.651 [2024-12-13 12:15:49.551766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.651 [2024-12-13 12:15:49.551872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.651 [2024-12-13 12:15:49.551873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.651 [2024-12-13 12:15:49.861534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.651 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.651 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:22.651 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.651 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:22.651 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.911 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:22.911 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.170 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:23.170 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.429 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.688 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:23.688 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.948 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:23.948 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.948 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:23.948 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:24.207 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.466 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.466 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.725 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.725 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.725 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.984 [2024-12-13 12:15:52.582958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.984 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:25.243 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:25.502 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.444 12:15:54 
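[annotation] Between nvmf_tgt coming up and the first fio pass, target/fio.sh provisions everything over JSON-RPC. Condensed from the trace (rpc.py stands for the full scripts/rpc.py path; --hostnqn/--hostid abbreviated), the sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512            # repeated seven times: Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # then Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0      # then concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-... --hostid=80b56b8f-...

Four namespaces result (two plain mallocs, one RAID-0, one concat), which is why waitforserial below expects 4 block devices carrying serial SPDKISFASTANDAWESOME before the fio passes start.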
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:26.444 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:26.444 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.444 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:26.444 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:26.444 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:28.984 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.984 [global] 00:10:28.984 thread=1 00:10:28.984 invalidate=1 00:10:28.984 rw=write 00:10:28.984 time_based=1 00:10:28.984 runtime=1 00:10:28.984 ioengine=libaio 00:10:28.984 direct=1 00:10:28.984 bs=4096 00:10:28.984 iodepth=1 00:10:28.984 norandommap=0 00:10:28.984 numjobs=1 00:10:28.984 00:10:28.984 verify_dump=1 00:10:28.984 verify_backlog=512 00:10:28.984 verify_state_save=0 00:10:28.984 do_verify=1 00:10:28.984 verify=crc32c-intel 00:10:28.984 [job0] 00:10:28.984 filename=/dev/nvme0n1 00:10:28.984 [job1] 00:10:28.984 filename=/dev/nvme0n2 00:10:28.984 [job2] 00:10:28.984 filename=/dev/nvme0n3 00:10:28.984 [job3] 00:10:28.984 filename=/dev/nvme0n4 00:10:28.984 Could not set queue depth (nvme0n1) 00:10:28.984 Could not set queue depth (nvme0n2) 00:10:28.984 Could not set queue depth (nvme0n3) 00:10:28.984 Could not set queue depth (nvme0n4) 00:10:28.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.984 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.984 fio-3.35 00:10:28.984 Starting 4 threads 00:10:30.370 00:10:30.370 job0: (groupid=0, jobs=1): err= 0: pid=187684: Fri Dec 13 12:15:57 2024 00:10:30.370 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:30.370 slat (nsec): min=6379, max=24598, avg=7265.92, stdev=946.34 00:10:30.370 clat (usec): min=154, max=504, avg=204.36, stdev=34.73 00:10:30.370 lat (usec): min=161, max=528, avg=211.63, stdev=34.88 00:10:30.370 clat percentiles (usec): 00:10:30.370 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 
00:10:30.370 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:10:30.370 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 237], 95.00th=[ 265], 00:10:30.370 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 429], 99.95th=[ 469], 00:10:30.370 | 99.99th=[ 506] 00:10:30.370 write: IOPS=2688, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:10:30.370 slat (nsec): min=9492, max=43364, avg=10637.49, stdev=1442.76 00:10:30.370 clat (usec): min=112, max=418, avg=154.63, stdev=33.18 00:10:30.370 lat (usec): min=122, max=428, avg=165.26, stdev=33.50 00:10:30.370 clat percentiles (usec): 00:10:30.370 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:10:30.370 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:10:30.370 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 215], 95.00th=[ 235], 00:10:30.370 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 347], 99.95th=[ 375], 00:10:30.370 | 99.99th=[ 420] 00:10:30.370 bw ( KiB/s): min=12216, max=12216, per=50.03%, avg=12216.00, stdev= 0.00, samples=1 00:10:30.370 iops : min= 3054, max= 3054, avg=3054.00, stdev= 0.00, samples=1 00:10:30.370 lat (usec) : 250=95.58%, 500=4.40%, 750=0.02% 00:10:30.370 cpu : usr=2.70%, sys=4.80%, ctx=5254, majf=0, minf=1 00:10:30.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 issued rwts: total=2560,2691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.371 job1: (groupid=0, jobs=1): err= 0: pid=187702: Fri Dec 13 12:15:57 2024 00:10:30.371 read: IOPS=1996, BW=7984KiB/s (8176kB/s)(8208KiB/1028msec) 00:10:30.371 slat (nsec): min=6206, max=28382, avg=7043.87, stdev=1266.27 00:10:30.371 clat (usec): min=169, max=41980, avg=304.14, stdev=1834.94 00:10:30.371 lat (usec): min=176, max=42002, avg=311.18, stdev=1835.46 00:10:30.371 clat percentiles (usec): 00:10:30.371 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:30.371 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 229], 60.00th=[ 241], 00:10:30.371 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 262], 00:10:30.371 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[42206], 99.95th=[42206], 00:10:30.371 | 99.99th=[42206] 00:10:30.371 write: IOPS=2490, BW=9961KiB/s (10.2MB/s)(10.0MiB/1028msec); 0 zone resets 00:10:30.371 slat (nsec): min=8971, max=46559, avg=10241.83, stdev=1725.08 00:10:30.371 clat (usec): min=103, max=295, avg=137.56, stdev=18.41 00:10:30.371 lat (usec): min=113, max=308, avg=147.80, stdev=18.82 00:10:30.371 clat percentiles (usec): 00:10:30.371 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:10:30.371 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:10:30.371 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:10:30.371 | 99.00th=[ 233], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 293], 00:10:30.371 | 99.99th=[ 297] 00:10:30.371 bw ( KiB/s): min= 8192, max=12288, per=41.94%, avg=10240.00, stdev=2896.31, samples=2 00:10:30.371 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:30.371 lat (usec) : 250=90.09%, 500=9.82% 00:10:30.371 lat (msec) : 50=0.09% 00:10:30.371 cpu : usr=2.04%, sys=4.19%, ctx=4613, majf=0, minf=2 00:10:30.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.371 job2: (groupid=0, jobs=1): err= 0: pid=187719: Fri Dec 13 12:15:57 2024 00:10:30.371 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:10:30.371 slat (nsec): min=9904, max=26236, avg=15292.91, stdev=5333.31 00:10:30.371 clat (usec): min=40751, max=42052, avg=41065.87, stdev=306.64 00:10:30.371 lat (usec): min=40766, max=42066, avg=41081.16, stdev=305.77 00:10:30.371 clat percentiles (usec): 00:10:30.371 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:30.371 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.371 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:30.371 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.371 | 99.99th=[42206] 00:10:30.371 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:10:30.371 slat (nsec): min=9786, max=43803, avg=12282.10, stdev=3311.06 00:10:30.371 clat (usec): min=144, max=346, avg=212.27, stdev=38.43 00:10:30.371 lat (usec): min=155, max=384, avg=224.55, stdev=38.68 00:10:30.371 clat percentiles (usec): 00:10:30.371 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 178], 00:10:30.371 | 30.00th=[ 190], 40.00th=[ 202], 50.00th=[ 215], 60.00th=[ 223], 00:10:30.371 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 273], 00:10:30.371 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:10:30.371 | 99.99th=[ 347] 00:10:30.371 bw ( KiB/s): min= 4096, max= 4096, per=16.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.371 lat (usec) : 250=85.02%, 500=10.86% 00:10:30.371 lat (msec) : 50=4.12% 00:10:30.371 cpu : usr=0.29%, sys=0.59%, ctx=536, majf=0, minf=1 00:10:30.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.371 job3: (groupid=0, jobs=1): err= 0: pid=187725: Fri Dec 13 12:15:57 2024 00:10:30.371 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:10:30.371 slat (nsec): min=10958, max=27403, avg=23257.71, stdev=3299.90 00:10:30.371 clat (usec): min=40876, max=41066, avg=40967.07, stdev=51.31 00:10:30.371 lat (usec): min=40899, max=41090, avg=40990.33, stdev=51.41 00:10:30.371 clat percentiles (usec): 00:10:30.371 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:30.371 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:30.371 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:30.371 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:30.371 | 99.99th=[41157] 00:10:30.371 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:10:30.371 slat (usec): min=11, max=41192, avg=94.48, stdev=1819.86 00:10:30.371 clat (usec): min=129, max=2973, avg=213.24, stdev=127.06 00:10:30.371 lat (usec): min=145, max=41494, avg=307.72, stdev=1828.21 00:10:30.371 clat 
percentiles (usec): 00:10:30.371 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 169], 00:10:30.371 | 30.00th=[ 192], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:10:30.371 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 260], 00:10:30.371 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 2966], 99.95th=[ 2966], 00:10:30.371 | 99.99th=[ 2966] 00:10:30.371 bw ( KiB/s): min= 4096, max= 4096, per=16.78%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.371 lat (usec) : 250=89.12%, 500=6.75% 00:10:30.371 lat (msec) : 4=0.19%, 50=3.94% 00:10:30.371 cpu : usr=0.49%, sys=0.98%, ctx=536, majf=0, minf=1 00:10:30.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.371 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.371 00:10:30.371 Run status group 0 (all jobs): 00:10:30.371 READ: bw=17.7MiB/s (18.5MB/s), 82.3KiB/s-9.99MiB/s (84.2kB/s-10.5MB/s), io=18.2MiB (19.1MB), run=1001-1028msec 00:10:30.371 WRITE: bw=23.8MiB/s (25.0MB/s), 2004KiB/s-10.5MiB/s (2052kB/s-11.0MB/s), io=24.5MiB (25.7MB), run=1001-1028msec 00:10:30.371 00:10:30.371 Disk stats (read/write): 00:10:30.371 nvme0n1: ios=2072/2357, merge=0/0, ticks=1282/353, in_queue=1635, util=85.27% 00:10:30.371 nvme0n2: ios=2098/2490, merge=0/0, ticks=507/333, in_queue=840, util=90.21% 00:10:30.371 nvme0n3: ios=39/512, merge=0/0, ticks=1601/105, in_queue=1706, util=93.31% 00:10:30.371 nvme0n4: ios=76/512, merge=0/0, ticks=911/99, in_queue=1010, util=94.20% 00:10:30.371 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:30.371 [global] 00:10:30.371 thread=1 00:10:30.371 invalidate=1 00:10:30.371 rw=randwrite 00:10:30.371 time_based=1 00:10:30.371 runtime=1 00:10:30.371 ioengine=libaio 00:10:30.371 direct=1 00:10:30.371 bs=4096 00:10:30.371 iodepth=1 00:10:30.371 norandommap=0 00:10:30.371 numjobs=1 00:10:30.371 00:10:30.371 verify_dump=1 00:10:30.371 verify_backlog=512 00:10:30.371 verify_state_save=0 00:10:30.371 do_verify=1 00:10:30.371 verify=crc32c-intel 00:10:30.371 [job0] 00:10:30.371 filename=/dev/nvme0n1 00:10:30.371 [job1] 00:10:30.371 filename=/dev/nvme0n2 00:10:30.371 [job2] 00:10:30.371 filename=/dev/nvme0n3 00:10:30.371 [job3] 00:10:30.371 filename=/dev/nvme0n4 00:10:30.371 Could not set queue depth (nvme0n1) 00:10:30.371 Could not set queue depth (nvme0n2) 00:10:30.371 Could not set queue depth (nvme0n3) 00:10:30.371 Could not set queue depth (nvme0n4) 00:10:30.628 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.628 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.628 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.628 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.628 fio-3.35 00:10:30.628 Starting 4 threads 00:10:31.999 00:10:31.999 job0: (groupid=0, jobs=1): err= 0: pid=188182: Fri Dec 13 12:15:59 2024 00:10:31.999 read: IOPS=22, 
BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:10:31.999 slat (nsec): min=9601, max=23798, avg=14127.48, stdev=4134.73 00:10:31.999 clat (usec): min=40867, max=41085, avg=40984.68, stdev=53.44 00:10:31.999 lat (usec): min=40888, max=41109, avg=40998.81, stdev=53.57 00:10:31.999 clat percentiles (usec): 00:10:31.999 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:31.999 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:31.999 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:31.999 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.999 | 99.99th=[41157] 00:10:31.999 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:10:31.999 slat (nsec): min=10339, max=44986, avg=12263.69, stdev=2380.03 00:10:31.999 clat (usec): min=139, max=327, avg=175.93, stdev=17.05 00:10:31.999 lat (usec): min=150, max=338, avg=188.20, stdev=17.51 00:10:31.999 clat percentiles (usec): 00:10:31.999 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:10:31.999 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:10:31.999 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:31.999 | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 330], 99.95th=[ 330], 00:10:31.999 | 99.99th=[ 330] 00:10:31.999 bw ( KiB/s): min= 4096, max= 4096, per=17.35%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.999 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.999 lat (usec) : 250=94.95%, 500=0.75% 00:10:31.999 lat (msec) : 50=4.30% 00:10:31.999 cpu : usr=0.38%, sys=0.96%, ctx=536, majf=0, minf=1 00:10:31.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.999 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.999 job1: (groupid=0, jobs=1): err= 0: pid=188194: Fri Dec 13 12:15:59 2024 00:10:31.999 read: IOPS=2536, BW=9.91MiB/s (10.4MB/s)(9.92MiB/1001msec) 00:10:31.999 slat (nsec): min=6081, max=26426, avg=7103.48, stdev=1077.83 00:10:31.999 clat (usec): min=153, max=683, avg=229.75, stdev=33.41 00:10:31.999 lat (usec): min=159, max=690, avg=236.85, stdev=33.45 00:10:31.999 clat percentiles (usec): 00:10:31.999 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:10:31.999 | 30.00th=[ 208], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 245], 00:10:31.999 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:10:31.999 | 99.00th=[ 285], 99.50th=[ 330], 99.90th=[ 510], 99.95th=[ 578], 00:10:31.999 | 99.99th=[ 685] 00:10:31.999 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.999 slat (nsec): min=8818, max=55391, avg=9919.70, stdev=1432.79 00:10:31.999 clat (usec): min=99, max=265, avg=141.20, stdev=22.06 00:10:31.999 lat (usec): min=109, max=275, avg=151.12, stdev=22.28 00:10:31.999 clat percentiles (usec): 00:10:31.999 | 1.00th=[ 111], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:10:31.999 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:10:31.999 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 188], 00:10:31.999 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 260], 99.95th=[ 262], 00:10:31.999 | 99.99th=[ 265] 00:10:31.999 bw ( KiB/s): min=12288, max=12288, per=52.05%, 
avg=12288.00, stdev= 0.00, samples=1 00:10:31.999 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:31.999 lat (usec) : 100=0.02%, 250=85.35%, 500=14.57%, 750=0.06% 00:10:31.999 cpu : usr=2.50%, sys=4.50%, ctx=5100, majf=0, minf=1 00:10:31.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.999 issued rwts: total=2539,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.000 job2: (groupid=0, jobs=1): err= 0: pid=188196: Fri Dec 13 12:15:59 2024 00:10:32.000 read: IOPS=22, BW=90.6KiB/s (92.8kB/s)(92.0KiB/1015msec) 00:10:32.000 slat (nsec): min=7507, max=23360, avg=21702.48, stdev=4150.07 00:10:32.000 clat (usec): min=279, max=41362, avg=39218.74, stdev=8489.07 00:10:32.000 lat (usec): min=288, max=41369, avg=39240.44, stdev=8491.65 00:10:32.000 clat percentiles (usec): 00:10:32.000 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:32.000 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:32.000 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:32.000 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:32.000 | 99.99th=[41157] 00:10:32.000 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:32.000 slat (nsec): min=9394, max=45050, avg=10594.46, stdev=2236.88 00:10:32.000 clat (usec): min=146, max=311, avg=204.84, stdev=38.62 00:10:32.000 lat (usec): min=156, max=356, avg=215.43, stdev=38.75 00:10:32.000 clat percentiles (usec): 00:10:32.000 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:10:32.000 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 202], 00:10:32.000 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 269], 95.00th=[ 289], 00:10:32.000 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 310], 99.95th=[ 310], 00:10:32.000 | 99.99th=[ 310] 00:10:32.000 bw ( KiB/s): min= 4096, max= 4096, per=17.35%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.000 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.000 lat (usec) : 250=79.63%, 500=16.26% 00:10:32.000 lat (msec) : 50=4.11% 00:10:32.000 cpu : usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:10:32.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.000 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.000 job3: (groupid=0, jobs=1): err= 0: pid=188198: Fri Dec 13 12:15:59 2024 00:10:32.000 read: IOPS=2473, BW=9894KiB/s (10.1MB/s)(9904KiB/1001msec) 00:10:32.000 slat (nsec): min=6776, max=27099, avg=7734.13, stdev=1135.42 00:10:32.000 clat (usec): min=165, max=40425, avg=225.73, stdev=808.31 00:10:32.000 lat (usec): min=173, max=40432, avg=233.47, stdev=808.31 00:10:32.000 clat percentiles (usec): 00:10:32.000 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:32.000 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:10:32.000 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 233], 00:10:32.000 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 251], 
99.95th=[ 258], 00:10:32.000 | 99.99th=[40633] 00:10:32.000 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:32.000 slat (nsec): min=10009, max=37708, avg=11210.48, stdev=1300.85 00:10:32.000 clat (usec): min=112, max=3283, avg=147.55, stdev=63.76 00:10:32.000 lat (usec): min=123, max=3294, avg=158.76, stdev=63.81 00:10:32.000 clat percentiles (usec): 00:10:32.000 | 1.00th=[ 119], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 135], 00:10:32.000 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:10:32.000 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 174], 00:10:32.000 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 215], 99.95th=[ 265], 00:10:32.000 | 99.99th=[ 3294] 00:10:32.000 bw ( KiB/s): min=11384, max=11384, per=48.22%, avg=11384.00, stdev= 0.00, samples=1 00:10:32.000 iops : min= 2846, max= 2846, avg=2846.00, stdev= 0.00, samples=1 00:10:32.000 lat (usec) : 250=99.82%, 500=0.14% 00:10:32.000 lat (msec) : 4=0.02%, 50=0.02% 00:10:32.000 cpu : usr=2.80%, sys=4.90%, ctx=5037, majf=0, minf=1 00:10:32.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.000 issued rwts: total=2476,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.000 00:10:32.000 Run status group 0 (all jobs): 00:10:32.000 READ: bw=19.0MiB/s (19.9MB/s), 88.4KiB/s-9.91MiB/s (90.5kB/s-10.4MB/s), io=19.8MiB (20.7MB), run=1001-1041msec 00:10:32.000 WRITE: bw=23.1MiB/s (24.2MB/s), 1967KiB/s-9.99MiB/s (2015kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1041msec 00:10:32.000 00:10:32.000 Disk stats (read/write): 00:10:32.000 nvme0n1: ios=68/512, merge=0/0, ticks=1539/86, in_queue=1625, util=94.29% 00:10:32.000 nvme0n2: ios=2075/2449, merge=0/0, ticks=1390/336, in_queue=1726, util=95.43% 00:10:32.000 nvme0n3: ios=44/512, merge=0/0, ticks=1724/99, in_queue=1823, util=98.54% 00:10:32.000 nvme0n4: ios=2091/2242, merge=0/0, ticks=1542/328, in_queue=1870, util=97.49% 00:10:32.000 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:32.000 [global] 00:10:32.000 thread=1 00:10:32.000 invalidate=1 00:10:32.000 rw=write 00:10:32.000 time_based=1 00:10:32.000 runtime=1 00:10:32.000 ioengine=libaio 00:10:32.000 direct=1 00:10:32.000 bs=4096 00:10:32.000 iodepth=128 00:10:32.000 norandommap=0 00:10:32.000 numjobs=1 00:10:32.000 00:10:32.000 verify_dump=1 00:10:32.000 verify_backlog=512 00:10:32.000 verify_state_save=0 00:10:32.000 do_verify=1 00:10:32.000 verify=crc32c-intel 00:10:32.000 [job0] 00:10:32.000 filename=/dev/nvme0n1 00:10:32.000 [job1] 00:10:32.000 filename=/dev/nvme0n2 00:10:32.000 [job2] 00:10:32.000 filename=/dev/nvme0n3 00:10:32.000 [job3] 00:10:32.000 filename=/dev/nvme0n4 00:10:32.000 Could not set queue depth (nvme0n1) 00:10:32.000 Could not set queue depth (nvme0n2) 00:10:32.000 Could not set queue depth (nvme0n3) 00:10:32.000 Could not set queue depth (nvme0n4) 00:10:32.000 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.000 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.000 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.000 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.000 fio-3.35 00:10:32.000 Starting 4 threads 00:10:33.378 00:10:33.378 job0: (groupid=0, jobs=1): err= 0: pid=188567: Fri Dec 13 12:16:00 2024 00:10:33.378 read: IOPS=5223, BW=20.4MiB/s (21.4MB/s)(21.4MiB/1051msec) 00:10:33.378 slat (nsec): min=1284, max=11165k, avg=98616.47, stdev=687475.75 00:10:33.378 clat (usec): min=3585, max=56053, avg=12645.41, stdev=7755.50 00:10:33.378 lat (usec): min=3591, max=59738, avg=12744.03, stdev=7796.41 00:10:33.378 clat percentiles (usec): 00:10:33.378 | 1.00th=[ 5211], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9372], 00:10:33.379 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:10:33.379 | 70.00th=[11731], 80.00th=[13960], 90.00th=[16581], 95.00th=[23200], 00:10:33.379 | 99.00th=[51119], 99.50th=[51119], 99.90th=[54789], 99.95th=[55837], 00:10:33.379 | 99.99th=[55837] 00:10:33.379 write: IOPS=5358, BW=20.9MiB/s (21.9MB/s)(22.0MiB/1051msec); 0 zone resets 00:10:33.379 slat (usec): min=2, max=8388, avg=76.71, stdev=363.29 00:10:33.379 clat (usec): min=2309, max=56019, avg=11282.19, stdev=5698.61 00:10:33.379 lat (usec): min=2319, max=56025, avg=11358.90, stdev=5733.67 00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 2868], 5.00th=[ 5014], 10.00th=[ 6783], 20.00th=[ 8979], 00:10:33.379 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:33.379 | 70.00th=[10028], 80.00th=[12518], 90.00th=[19268], 95.00th=[21103], 00:10:33.379 | 99.00th=[38011], 99.50th=[43779], 99.90th=[51643], 99.95th=[51643], 00:10:33.379 | 99.99th=[55837] 00:10:33.379 bw ( KiB/s): min=20400, max=24656, per=31.89%, avg=22528.00, stdev=3009.45, samples=2 00:10:33.379 iops : min= 5100, max= 6164, avg=5632.00, stdev=752.36, samples=2 00:10:33.379 lat (msec) : 4=1.76%, 10=48.45%, 20=42.47%, 50=5.99%, 100=1.33% 00:10:33.379 cpu : usr=4.19%, sys=4.95%, ctx=676, majf=0, minf=1 00:10:33.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:33.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.379 issued rwts: total=5490,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.379 job1: (groupid=0, jobs=1): err= 0: pid=188568: Fri Dec 13 12:16:00 2024 00:10:33.379 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1008msec) 00:10:33.379 slat (nsec): min=1352, max=12623k, avg=100266.64, stdev=671059.68 00:10:33.379 clat (usec): min=3523, max=40654, avg=12290.93, stdev=4402.72 00:10:33.379 lat (usec): min=3531, max=40662, avg=12391.20, stdev=4456.28 00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 4817], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10028], 00:10:33.379 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:10:33.379 | 70.00th=[12125], 80.00th=[13173], 90.00th=[16319], 95.00th=[20579], 00:10:33.379 | 99.00th=[30540], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:10:33.379 | 99.99th=[40633] 00:10:33.379 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:10:33.379 slat (usec): min=2, max=8661, avg=84.25, stdev=408.88 00:10:33.379 clat (usec): min=3213, max=47796, avg=13789.99, stdev=7738.32 00:10:33.379 lat (usec): min=3250, max=47835, avg=13874.24, stdev=7793.07 
00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 6783], 20.00th=[ 8848], 00:10:33.379 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:10:33.379 | 70.00th=[14877], 80.00th=[20055], 90.00th=[26870], 95.00th=[31589], 00:10:33.379 | 99.00th=[35390], 99.50th=[35914], 99.90th=[47449], 99.95th=[47973], 00:10:33.379 | 99.99th=[47973] 00:10:33.379 bw ( KiB/s): min=18504, max=21616, per=28.40%, avg=20060.00, stdev=2200.52, samples=2 00:10:33.379 iops : min= 4626, max= 5404, avg=5015.00, stdev=550.13, samples=2 00:10:33.379 lat (msec) : 4=0.47%, 10=26.79%, 20=59.70%, 50=13.04% 00:10:33.379 cpu : usr=3.08%, sys=6.16%, ctx=529, majf=0, minf=2 00:10:33.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:33.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.379 issued rwts: total=4631,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.379 job2: (groupid=0, jobs=1): err= 0: pid=188569: Fri Dec 13 12:16:00 2024 00:10:33.379 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:33.379 slat (nsec): min=1510, max=14917k, avg=117917.98, stdev=853852.77 00:10:33.379 clat (usec): min=3526, max=46146, avg=15165.16, stdev=7180.31 00:10:33.379 lat (usec): min=3533, max=47749, avg=15283.07, stdev=7247.64 00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 5538], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10945], 00:10:33.379 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[14353], 00:10:33.379 | 70.00th=[16909], 80.00th=[19006], 90.00th=[24773], 95.00th=[30802], 00:10:33.379 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45876], 99.95th=[46400], 00:10:33.379 | 99.99th=[46400] 00:10:33.379 write: IOPS=4378, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1010msec); 0 zone resets 00:10:33.379 slat (usec): min=2, max=11335, avg=109.05, stdev=547.34 00:10:33.379 clat (usec): min=950, max=89077, avg=14696.48, stdev=11883.26 00:10:33.379 lat (usec): min=962, max=89084, avg=14805.53, stdev=11954.88 00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 3621], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[10028], 00:10:33.379 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:33.379 | 70.00th=[11600], 80.00th=[19268], 90.00th=[21365], 95.00th=[32375], 00:10:33.379 | 99.00th=[81265], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:10:33.379 | 99.99th=[88605] 00:10:33.379 bw ( KiB/s): min=12288, max=22072, per=24.32%, avg=17180.00, stdev=6918.33, samples=2 00:10:33.379 iops : min= 3072, max= 5518, avg=4295.00, stdev=1729.58, samples=2 00:10:33.379 lat (usec) : 1000=0.04% 00:10:33.379 lat (msec) : 2=0.01%, 4=0.88%, 10=15.43%, 20=65.77%, 50=16.40% 00:10:33.379 lat (msec) : 100=1.48% 00:10:33.379 cpu : usr=3.47%, sys=4.26%, ctx=562, majf=0, minf=2 00:10:33.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:33.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.379 issued rwts: total=4096,4422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.379 job3: (groupid=0, jobs=1): err= 0: pid=188570: Fri Dec 13 12:16:00 2024 00:10:33.379 read: IOPS=3047, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1008msec) 00:10:33.379 slat (nsec): min=1322, max=13567k, avg=124509.36, stdev=792167.81 00:10:33.379 clat (usec): min=4099, max=96846, avg=16834.90, stdev=11207.21 00:10:33.379 lat (msec): min=4, max=103, avg=16.96, stdev=11.27 00:10:33.379 clat percentiles (usec): 00:10:33.379 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[10814], 00:10:33.379 | 30.00th=[11338], 40.00th=[12649], 50.00th=[14615], 60.00th=[15401], 00:10:33.379 | 70.00th=[16188], 80.00th=[17695], 90.00th=[24249], 95.00th=[36963], 00:10:33.379 | 99.00th=[71828], 99.50th=[88605], 99.90th=[96994], 99.95th=[96994], 00:10:33.379 | 99.99th=[96994] 00:10:33.379 write: IOPS=3359, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1008msec); 0 zone resets 00:10:33.379 slat (usec): min=2, max=10451, avg=176.62, stdev=913.07 00:10:33.379 clat (usec): min=1752, max=113681, avg=22497.63, stdev=23834.01 00:10:33.379 lat (usec): min=1765, max=113691, avg=22674.26, stdev=24006.64 00:10:33.379 clat percentiles (msec): 00:10:33.379 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:10:33.379 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 17], 00:10:33.379 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 52], 95.00th=[ 95], 00:10:33.379 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 114], 00:10:33.379 | 99.99th=[ 114] 00:10:33.379 bw ( KiB/s): min= 6336, max=19728, per=18.45%, avg=13032.00, stdev=9469.57, samples=2 00:10:33.379 iops : min= 1584, max= 4932, avg=3258.00, stdev=2367.39, samples=2 00:10:33.379 lat (msec) : 2=0.42%, 4=1.08%, 10=9.76%, 20=66.20%, 50=15.47% 00:10:33.379 lat (msec) : 100=5.62%, 250=1.46% 00:10:33.379 cpu : usr=2.98%, sys=3.77%, ctx=405, majf=0, minf=2 00:10:33.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:33.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.379 issued rwts: total=3072,3386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.379 00:10:33.379 Run status group 0 (all jobs): 00:10:33.379 READ: bw=64.3MiB/s (67.4MB/s), 11.9MiB/s-20.4MiB/s (12.5MB/s-21.4MB/s), io=67.5MiB (70.8MB), run=1008-1051msec 00:10:33.379 WRITE: bw=69.0MiB/s (72.3MB/s), 13.1MiB/s-20.9MiB/s (13.8MB/s-21.9MB/s), io=72.5MiB (76.0MB), run=1008-1051msec 00:10:33.379 00:10:33.379 Disk stats (read/write): 00:10:33.379 nvme0n1: ios=4525/4608, merge=0/0, ticks=52175/52829, in_queue=105004, util=98.20% 00:10:33.379 nvme0n2: ios=3856/4096, merge=0/0, ticks=38996/52434, in_queue=91430, util=91.78% 00:10:33.379 nvme0n3: ios=3072/3584, merge=0/0, ticks=35032/40997, in_queue=76029, util=88.98% 00:10:33.379 nvme0n4: ios=3072/3167, merge=0/0, ticks=27450/30568, in_queue=58018, util=89.73% 00:10:33.379 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:33.379 [global] 00:10:33.379 thread=1 00:10:33.379 invalidate=1 00:10:33.379 rw=randwrite 00:10:33.379 time_based=1 00:10:33.379 runtime=1 00:10:33.379 ioengine=libaio 00:10:33.379 direct=1 00:10:33.379 bs=4096 00:10:33.379 iodepth=128 00:10:33.379 norandommap=0 00:10:33.379 numjobs=1 00:10:33.379 00:10:33.379 verify_dump=1 00:10:33.379 verify_backlog=512 00:10:33.379 verify_state_save=0 00:10:33.379 do_verify=1 00:10:33.379 verify=crc32c-intel 00:10:33.379 [job0] 00:10:33.379 filename=/dev/nvme0n1 
00:10:33.379 [job1] 00:10:33.379 filename=/dev/nvme0n2 00:10:33.379 [job2] 00:10:33.379 filename=/dev/nvme0n3 00:10:33.379 [job3] 00:10:33.379 filename=/dev/nvme0n4 00:10:33.379 Could not set queue depth (nvme0n1) 00:10:33.379 Could not set queue depth (nvme0n2) 00:10:33.379 Could not set queue depth (nvme0n3) 00:10:33.379 Could not set queue depth (nvme0n4) 00:10:33.637 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.637 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.637 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.637 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.637 fio-3.35 00:10:33.637 Starting 4 threads 00:10:35.016 00:10:35.016 job0: (groupid=0, jobs=1): err= 0: pid=188930: Fri Dec 13 12:16:02 2024 00:10:35.016 read: IOPS=4180, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1003msec) 00:10:35.016 slat (nsec): min=1121, max=13393k, avg=114119.37, stdev=662400.21 00:10:35.016 clat (usec): min=2099, max=44721, avg=15211.10, stdev=5831.74 00:10:35.016 lat (usec): min=4522, max=44751, avg=15325.22, stdev=5852.82 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10552], 00:10:35.016 | 30.00th=[11076], 40.00th=[12256], 50.00th=[15664], 60.00th=[16909], 00:10:35.016 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18744], 95.00th=[27132], 00:10:35.016 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[40109], 00:10:35.016 | 99.99th=[44827] 00:10:35.016 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:35.016 slat (usec): min=2, max=20049, avg=107.14, stdev=781.07 00:10:35.016 clat (usec): min=2883, max=52296, avg=13684.32, stdev=6333.61 00:10:35.016 lat (usec): min=2893, max=52305, avg=13791.46, stdev=6407.32 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 6063], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[ 9634], 00:10:35.016 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11863], 60.00th=[12256], 00:10:35.016 | 70.00th=[14353], 80.00th=[16712], 90.00th=[21890], 95.00th=[25560], 00:10:35.016 | 99.00th=[37487], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:10:35.016 | 99.99th=[52167] 00:10:35.016 bw ( KiB/s): min=18296, max=18328, per=28.21%, avg=18312.00, stdev=22.63, samples=2 00:10:35.016 iops : min= 4574, max= 4582, avg=4578.00, stdev= 5.66, samples=2 00:10:35.016 lat (msec) : 4=0.48%, 10=21.62%, 20=67.84%, 50=10.04%, 100=0.01% 00:10:35.016 cpu : usr=3.49%, sys=5.29%, ctx=367, majf=0, minf=1 00:10:35.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:35.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.016 issued rwts: total=4193,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.016 job1: (groupid=0, jobs=1): err= 0: pid=188931: Fri Dec 13 12:16:02 2024 00:10:35.016 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:35.016 slat (nsec): min=1452, max=25038k, avg=112325.14, stdev=765932.25 00:10:35.016 clat (usec): min=7439, max=58976, avg=14746.44, stdev=7403.78 00:10:35.016 lat (usec): min=7443, max=58985, avg=14858.77, stdev=7445.30 00:10:35.016 clat percentiles 
(usec): 00:10:35.016 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10290], 00:10:35.016 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12387], 60.00th=[14746], 00:10:35.016 | 70.00th=[16188], 80.00th=[17171], 90.00th=[19006], 95.00th=[24249], 00:10:35.016 | 99.00th=[50594], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:10:35.016 | 99.99th=[58983] 00:10:35.016 write: IOPS=4173, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec); 0 zone resets 00:10:35.016 slat (nsec): min=1995, max=13624k, avg=123016.93, stdev=696125.59 00:10:35.016 clat (usec): min=4024, max=48023, avg=16002.76, stdev=7536.65 00:10:35.016 lat (usec): min=5732, max=48028, avg=16125.78, stdev=7587.98 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9372], 00:10:35.016 | 30.00th=[ 9765], 40.00th=[13960], 50.00th=[15926], 60.00th=[16909], 00:10:35.016 | 70.00th=[17171], 80.00th=[19792], 90.00th=[25035], 95.00th=[30802], 00:10:35.016 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:10:35.016 | 99.99th=[47973] 00:10:35.016 bw ( KiB/s): min=15472, max=17296, per=25.24%, avg=16384.00, stdev=1289.76, samples=2 00:10:35.016 iops : min= 3868, max= 4324, avg=4096.00, stdev=322.44, samples=2 00:10:35.016 lat (msec) : 10=23.23%, 20=63.45%, 50=12.19%, 100=1.13% 00:10:35.016 cpu : usr=3.48%, sys=4.48%, ctx=412, majf=0, minf=1 00:10:35.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:35.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.016 issued rwts: total=4096,4199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.016 job2: (groupid=0, jobs=1): err= 0: pid=188932: Fri Dec 13 12:16:02 2024 00:10:35.016 read: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1005msec) 00:10:35.016 slat (nsec): min=1878, max=10287k, avg=116037.30, stdev=727995.86 00:10:35.016 clat (usec): min=3657, max=42403, avg=13549.88, stdev=5746.64 00:10:35.016 lat (usec): min=6019, max=42414, avg=13665.92, stdev=5812.10 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 7963], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10028], 00:10:35.016 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12518], 60.00th=[12518], 00:10:35.016 | 70.00th=[12780], 80.00th=[14091], 90.00th=[19530], 95.00th=[26608], 00:10:35.016 | 99.00th=[38011], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:10:35.016 | 99.99th=[42206] 00:10:35.016 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:35.016 slat (usec): min=2, max=21880, avg=160.48, stdev=934.12 00:10:35.016 clat (usec): min=4034, max=60156, avg=22337.93, stdev=11775.99 00:10:35.016 lat (usec): min=4043, max=60187, avg=22498.41, stdev=11867.97 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 5669], 5.00th=[ 7242], 10.00th=[ 9241], 20.00th=[10683], 00:10:35.016 | 30.00th=[11338], 40.00th=[15401], 50.00th=[20317], 60.00th=[28705], 00:10:35.016 | 70.00th=[31589], 80.00th=[34866], 90.00th=[38011], 95.00th=[39060], 00:10:35.016 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[59507], 00:10:35.016 | 99.99th=[60031] 00:10:35.016 bw ( KiB/s): min=12288, max=16384, per=22.08%, avg=14336.00, stdev=2896.31, samples=2 00:10:35.016 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:35.016 lat (msec) : 4=0.01%, 10=18.34%, 20=51.43%, 50=30.18%, 100=0.04% 
00:10:35.016 cpu : usr=3.69%, sys=4.38%, ctx=340, majf=0, minf=1 00:10:35.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:35.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.016 issued rwts: total=3428,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.016 job3: (groupid=0, jobs=1): err= 0: pid=188933: Fri Dec 13 12:16:02 2024 00:10:35.016 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:35.016 slat (nsec): min=1158, max=43234k, avg=139266.49, stdev=999049.80 00:10:35.016 clat (usec): min=4439, max=59980, avg=17105.46, stdev=5305.04 00:10:35.016 lat (usec): min=4444, max=59990, avg=17244.72, stdev=5329.41 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 6783], 5.00th=[10552], 10.00th=[11207], 20.00th=[14091], 00:10:35.016 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:10:35.016 | 70.00th=[17695], 80.00th=[18482], 90.00th=[21627], 95.00th=[23725], 00:10:35.016 | 99.00th=[31327], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:10:35.016 | 99.99th=[60031] 00:10:35.016 write: IOPS=3924, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:10:35.016 slat (usec): min=2, max=9107, avg=120.31, stdev=458.79 00:10:35.016 clat (usec): min=1224, max=59774, avg=16749.00, stdev=7550.15 00:10:35.016 lat (usec): min=1234, max=59782, avg=16869.32, stdev=7563.70 00:10:35.016 clat percentiles (usec): 00:10:35.016 | 1.00th=[ 4883], 5.00th=[10028], 10.00th=[10421], 20.00th=[11207], 00:10:35.016 | 30.00th=[15139], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:10:35.016 | 70.00th=[17695], 80.00th=[18482], 90.00th=[20579], 95.00th=[24249], 00:10:35.016 | 99.00th=[57410], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:10:35.016 | 99.99th=[60031] 00:10:35.016 bw ( KiB/s): min=13256, max=17216, per=23.47%, avg=15236.00, stdev=2800.14, samples=2 00:10:35.016 iops : min= 3314, max= 4304, avg=3809.00, stdev=700.04, samples=2 00:10:35.016 lat (msec) : 2=0.03%, 4=0.20%, 10=4.72%, 20=82.23%, 50=11.13% 00:10:35.016 lat (msec) : 100=1.69% 00:10:35.016 cpu : usr=1.70%, sys=4.29%, ctx=561, majf=0, minf=2 00:10:35.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:35.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.016 issued rwts: total=3584,3936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.016 00:10:35.016 Run status group 0 (all jobs): 00:10:35.016 READ: bw=59.4MiB/s (62.3MB/s), 13.3MiB/s-16.3MiB/s (14.0MB/s-17.1MB/s), io=59.8MiB (62.7MB), run=1003-1006msec 00:10:35.016 WRITE: bw=63.4MiB/s (66.5MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=63.8MiB (66.9MB), run=1003-1006msec 00:10:35.016 00:10:35.016 Disk stats (read/write): 00:10:35.016 nvme0n1: ios=3634/3786, merge=0/0, ticks=19512/21602, in_queue=41114, util=97.29% 00:10:35.016 nvme0n2: ios=3094/3584, merge=0/0, ticks=20989/25642, in_queue=46631, util=100.00% 00:10:35.016 nvme0n3: ios=3107/3143, merge=0/0, ticks=36777/56566, in_queue=93343, util=99.17% 00:10:35.016 nvme0n4: ios=3118/3584, merge=0/0, ticks=18160/21480, in_queue=39640, util=98.85% 00:10:35.017 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 
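[annotation] All of the passes above come from scripts/fio-wrapper, and its flags map one-to-one onto the job files echoed into the log — worth spelling out once, since the next invocation (below) changes several of them. The correspondence as observed in this log:

    scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
    #   -i 4096      -> bs=4096
    #   -d 128       -> iodepth=128
    #   -t randwrite -> rw=randwrite
    #   -r 1         -> runtime=1 (with time_based=1)
    #   -v           -> do_verify=1 / verify=crc32c-intel / verify_dump=1
    #   -p nvmf      -> one [jobN] per namespace, filename=/dev/nvme0n1 .. /dev/nvme0n4
    # without -v (as in the 10-second read pass below) the verify block is
    # absent and the job file shows norandommap=1 instead

The 'Could not set queue depth' warnings that precede each run are fio probing the block devices and are benign here.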
00:10:35.017 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189162 00:10:35.017 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:35.017 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:35.017 [global] 00:10:35.017 thread=1 00:10:35.017 invalidate=1 00:10:35.017 rw=read 00:10:35.017 time_based=1 00:10:35.017 runtime=10 00:10:35.017 ioengine=libaio 00:10:35.017 direct=1 00:10:35.017 bs=4096 00:10:35.017 iodepth=1 00:10:35.017 norandommap=1 00:10:35.017 numjobs=1 00:10:35.017 00:10:35.017 [job0] 00:10:35.017 filename=/dev/nvme0n1 00:10:35.017 [job1] 00:10:35.017 filename=/dev/nvme0n2 00:10:35.017 [job2] 00:10:35.017 filename=/dev/nvme0n3 00:10:35.017 [job3] 00:10:35.017 filename=/dev/nvme0n4 00:10:35.017 Could not set queue depth (nvme0n1) 00:10:35.017 Could not set queue depth (nvme0n2) 00:10:35.017 Could not set queue depth (nvme0n3) 00:10:35.017 Could not set queue depth (nvme0n4) 00:10:35.276 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.276 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.276 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.276 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.276 fio-3.35 00:10:35.276 Starting 4 threads 00:10:38.568 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:38.568 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:38.568 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:10:38.568 fio: pid=189307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:38.568 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53694464, buflen=4096 00:10:38.568 fio: pid=189306, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:38.568 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.568 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:38.568 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.568 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:38.568 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=352256, buflen=4096 00:10:38.568 fio: pid=189304, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:38.828 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1085440, buflen=4096 00:10:38.828 fio: pid=189305, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:38.828 12:16:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.828 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:38.828 00:10:38.828 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189304: Fri Dec 13 12:16:06 2024 00:10:38.828 read: IOPS=27, BW=109KiB/s (112kB/s)(344KiB/3142msec) 00:10:38.828 slat (usec): min=6, max=22792, avg=393.71, stdev=2642.62 00:10:38.828 clat (usec): min=194, max=42377, avg=35883.22, stdev=13708.39 00:10:38.828 lat (usec): min=203, max=63950, avg=36168.45, stdev=14029.06 00:10:38.828 clat percentiles (usec): 00:10:38.828 | 1.00th=[ 194], 5.00th=[ 229], 10.00th=[ 420], 20.00th=[40633], 00:10:38.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.828 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:38.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.828 | 99.99th=[42206] 00:10:38.828 bw ( KiB/s): min= 93, max= 152, per=0.68%, avg=110.17, stdev=21.56, samples=6 00:10:38.828 iops : min= 23, max= 38, avg=27.50, stdev= 5.43, samples=6 00:10:38.828 lat (usec) : 250=8.05%, 500=2.30%, 750=2.30% 00:10:38.828 lat (msec) : 50=86.21% 00:10:38.828 cpu : usr=0.10%, sys=0.00%, ctx=90, majf=0, minf=1 00:10:38.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.828 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=189305: Fri Dec 13 12:16:06 2024 00:10:38.828 read: IOPS=79, BW=318KiB/s (325kB/s)(1060KiB/3337msec) 00:10:38.828 slat (usec): min=6, max=11878, avg=156.32, stdev=1197.78 00:10:38.828 clat (usec): min=226, max=42168, avg=12432.07, stdev=18725.31 00:10:38.828 lat (usec): min=232, max=52989, avg=12562.75, stdev=18947.55 00:10:38.828 clat percentiles (usec): 00:10:38.828 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 243], 00:10:38.828 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:38.828 | 70.00th=[ 412], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:38.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.828 | 99.99th=[42206] 00:10:38.828 bw ( KiB/s): min= 96, max= 1368, per=2.12%, avg=343.17, stdev=508.86, samples=6 00:10:38.828 iops : min= 24, max= 342, avg=85.67, stdev=127.29, samples=6 00:10:38.828 lat (usec) : 250=34.21%, 500=35.71% 00:10:38.828 lat (msec) : 50=29.70% 00:10:38.828 cpu : usr=0.00%, sys=0.33%, ctx=269, majf=0, minf=2 00:10:38.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 issued rwts: total=266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.828 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=189306: Fri Dec 13 12:16:06 2024 00:10:38.828 read: IOPS=4506, BW=17.6MiB/s (18.5MB/s)(51.2MiB/2909msec) 00:10:38.828 slat (usec): min=7, max=9568, avg= 9.65, stdev=104.34 00:10:38.828 clat (usec): min=156, max=586, avg=208.76, stdev=17.45 00:10:38.828 lat (usec): min=163, max=9980, avg=218.40, stdev=108.30 00:10:38.828 clat percentiles (usec): 00:10:38.828 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:38.828 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 210], 00:10:38.828 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 241], 00:10:38.828 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 302], 99.95th=[ 412], 00:10:38.828 | 99.99th=[ 586] 00:10:38.828 bw ( KiB/s): min=17160, max=18624, per=100.00%, avg=18286.40, stdev=633.07, samples=5 00:10:38.828 iops : min= 4290, max= 4656, avg=4571.60, stdev=158.27, samples=5 00:10:38.828 lat (usec) : 250=97.12%, 500=2.84%, 750=0.03% 00:10:38.828 cpu : usr=2.30%, sys=7.46%, ctx=13113, majf=0, minf=2 00:10:38.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 issued rwts: total=13110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.828 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189307: Fri Dec 13 12:16:06 2024 00:10:38.828 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2730msec) 00:10:38.828 slat (nsec): min=12500, max=44668, avg=23683.43, stdev=3197.47 00:10:38.828 clat (usec): min=831, max=41943, avg=40398.85, stdev=4909.59 00:10:38.828 lat (usec): min=876, max=41967, avg=40422.51, stdev=4907.00 00:10:38.828 clat percentiles (usec): 00:10:38.828 | 1.00th=[ 832], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:38.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.828 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:38.828 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.828 | 99.99th=[42206] 00:10:38.828 bw ( KiB/s): min= 96, max= 104, per=0.61%, avg=99.20, stdev= 4.38, samples=5 00:10:38.828 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:38.828 lat (usec) : 1000=1.47% 00:10:38.828 lat (msec) : 50=97.06% 00:10:38.828 cpu : usr=0.15%, sys=0.00%, ctx=69, majf=0, minf=1 00:10:38.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.828 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.828 00:10:38.828 Run status group 0 (all jobs): 00:10:38.828 READ: bw=15.8MiB/s (16.6MB/s), 98.2KiB/s-17.6MiB/s (101kB/s-18.5MB/s), io=52.8MiB (55.4MB), run=2730-3337msec 00:10:38.828 00:10:38.828 Disk stats (read/write): 00:10:38.828 nvme0n1: ios=85/0, merge=0/0, ticks=3046/0, in_queue=3046, util=95.07% 00:10:38.828 nvme0n2: ios=260/0, merge=0/0, ticks=3090/0, in_queue=3090, util=95.85% 00:10:38.828 nvme0n3: ios=12996/0, merge=0/0, ticks=2764/0, in_queue=2764, util=98.65% 00:10:38.828 nvme0n4: ios=86/0, merge=0/0, ticks=2760/0, in_queue=2760, util=98.96% 00:10:39.088 12:16:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.088 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:39.088 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.088 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:39.347 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.347 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:39.606 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.606 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189162 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:39.865 nvmf hotplug test: fio failed as expected 00:10:39.865 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.125 rmmod nvme_tcp 00:10:40.125 rmmod nvme_fabrics 00:10:40.125 rmmod nvme_keyring 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186300 ']' 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186300 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186300 ']' 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186300 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:40.125 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186300 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186300' 00:10:40.385 killing process with pid 186300 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186300 00:10:40.385 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186300 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
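The nvmftestfini teardown traced here, continuing into remove_spdk_ns below, proceeds in a fixed order: sync, unload the initiator-side NVMe modules, kill the nvmf_tgt reactor process by pid, then restore iptables with every SPDK-tagged rule filtered out, so only the rules tagged SPDK_NVMF earlier in the run are dropped. A condensed sketch of the same sequence (the final netns removal is an assumption; remove_spdk_ns is not expanded in this log):

#!/usr/bin/env bash
nvmfpid=186300                                        # pid recorded by nvmfappstart for this run
sync                                                  # flush dirty pages before tearing down
modprobe -v -r nvme-tcp nvme-fabrics                  # unload initiator modules; the rmmod lines
                                                      # above show nvme_tcp, nvme_fabrics and
                                                      # nvme_keyring going away
kill "$nvmfpid" 2>/dev/null || true                   # stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged firewall rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of remove_spdk_ns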
00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.385 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.925 00:10:42.925 real 0m26.853s 00:10:42.925 user 1m47.474s 00:10:42.925 sys 0m8.259s 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.925 ************************************ 00:10:42.925 END TEST nvmf_fio_target 00:10:42.925 ************************************ 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.925 ************************************ 00:10:42.925 START TEST nvmf_bdevio 00:10:42.925 ************************************ 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:42.925 * Looking for test storage... 
00:10:42.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:42.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.925 --rc genhtml_branch_coverage=1 00:10:42.925 --rc genhtml_function_coverage=1 00:10:42.925 --rc genhtml_legend=1 00:10:42.925 --rc geninfo_all_blocks=1 00:10:42.925 --rc geninfo_unexecuted_blocks=1 00:10:42.925 00:10:42.925 ' 00:10:42.925 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:42.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.925 --rc genhtml_branch_coverage=1 00:10:42.925 --rc genhtml_function_coverage=1 00:10:42.925 --rc genhtml_legend=1 00:10:42.926 --rc geninfo_all_blocks=1 00:10:42.926 --rc geninfo_unexecuted_blocks=1 00:10:42.926 00:10:42.926 ' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:42.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.926 --rc genhtml_branch_coverage=1 00:10:42.926 --rc genhtml_function_coverage=1 00:10:42.926 --rc genhtml_legend=1 00:10:42.926 --rc geninfo_all_blocks=1 00:10:42.926 --rc geninfo_unexecuted_blocks=1 00:10:42.926 00:10:42.926 ' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:42.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.926 --rc genhtml_branch_coverage=1 00:10:42.926 --rc genhtml_function_coverage=1 00:10:42.926 --rc genhtml_legend=1 00:10:42.926 --rc geninfo_all_blocks=1 00:10:42.926 --rc geninfo_unexecuted_blocks=1 00:10:42.926 00:10:42.926 ' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.926 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:49.503 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:49.503 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.503 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:49.503 Found net devices under 0000:af:00.0: cvl_0_0 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:49.503 Found net devices under 0000:af:00.1: cvl_0_1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.503 
12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:10:49.503 00:10:49.503 --- 10.0.0.2 ping statistics --- 00:10:49.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.503 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:10:49.503 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:49.503 00:10:49.503 --- 10.0.0.1 ping statistics --- 00:10:49.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.504 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193688 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193688 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193688 ']' 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 [2024-12-13 12:16:16.355678] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:49.504 [2024-12-13 12:16:16.355724] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.504 [2024-12-13 12:16:16.429890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.504 [2024-12-13 12:16:16.452621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.504 [2024-12-13 12:16:16.452658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.504 [2024-12-13 12:16:16.452664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.504 [2024-12-13 12:16:16.452670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.504 [2024-12-13 12:16:16.452675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.504 [2024-12-13 12:16:16.454191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:49.504 [2024-12-13 12:16:16.454300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:49.504 [2024-12-13 12:16:16.454411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.504 [2024-12-13 12:16:16.454412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 [2024-12-13 12:16:16.586384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 Malloc0 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 [2024-12-13 12:16:16.656625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:49.504 { 00:10:49.504 "params": { 00:10:49.504 "name": "Nvme$subsystem", 00:10:49.504 "trtype": "$TEST_TRANSPORT", 00:10:49.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.504 "adrfam": "ipv4", 00:10:49.504 "trsvcid": "$NVMF_PORT", 00:10:49.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.504 "hdgst": ${hdgst:-false}, 00:10:49.504 "ddgst": ${ddgst:-false} 00:10:49.504 }, 00:10:49.504 "method": "bdev_nvme_attach_controller" 00:10:49.504 } 00:10:49.504 EOF 00:10:49.504 )") 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:49.504 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:49.504 "params": { 00:10:49.504 "name": "Nvme1", 00:10:49.504 "trtype": "tcp", 00:10:49.504 "traddr": "10.0.0.2", 00:10:49.504 "adrfam": "ipv4", 00:10:49.504 "trsvcid": "4420", 00:10:49.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.504 "hdgst": false, 00:10:49.504 "ddgst": false 00:10:49.504 }, 00:10:49.504 "method": "bdev_nvme_attach_controller" 00:10:49.504 }' 00:10:49.504 [2024-12-13 12:16:16.705899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:49.504 [2024-12-13 12:16:16.705940] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193716 ] 00:10:49.504 [2024-12-13 12:16:16.782187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.504 [2024-12-13 12:16:16.807277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.504 [2024-12-13 12:16:16.807385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.504 [2024-12-13 12:16:16.807386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.504 I/O targets: 00:10:49.504 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:49.504 00:10:49.504 00:10:49.504 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.504 http://cunit.sourceforge.net/ 00:10:49.504 00:10:49.504 00:10:49.504 Suite: bdevio tests on: Nvme1n1 00:10:49.504 Test: blockdev write read block ...passed 00:10:49.504 Test: blockdev write zeroes read block ...passed 00:10:49.504 Test: blockdev write zeroes read no split ...passed 00:10:49.504 Test: blockdev write zeroes read split ...passed 00:10:49.504 Test: blockdev write zeroes read split partial ...passed 00:10:49.504 Test: blockdev reset ...[2024-12-13 12:16:17.201118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:49.504 [2024-12-13 12:16:17.201177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b9630 (9): Bad file descriptor 00:10:49.764 [2024-12-13 12:16:17.254540] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:49.764 passed 00:10:49.764 Test: blockdev write read 8 blocks ...passed 00:10:49.764 Test: blockdev write read size > 128k ...passed 00:10:49.764 Test: blockdev write read invalid size ...passed 00:10:49.764 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:49.764 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:49.764 Test: blockdev write read max offset ...passed 00:10:49.764 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:49.764 Test: blockdev writev readv 8 blocks ...passed 00:10:49.764 Test: blockdev writev readv 30 x 1block ...passed 00:10:49.764 Test: blockdev writev readv block ...passed 00:10:49.764 Test: blockdev writev readv size > 128k ...passed 00:10:49.764 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:49.764 Test: blockdev comparev and writev ...[2024-12-13 12:16:17.424517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.424549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.424563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.424571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.424814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.424825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.424836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.424848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.425078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.425088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.425099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.425106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.425324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.425334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:49.764 [2024-12-13 12:16:17.425345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.764 [2024-12-13 12:16:17.425351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:50.024 passed 00:10:50.024 Test: blockdev nvme passthru rw ...passed 00:10:50.024 Test: blockdev nvme passthru vendor specific ...[2024-12-13 12:16:17.507127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.024 [2024-12-13 12:16:17.507144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:50.024 [2024-12-13 12:16:17.507248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.024 [2024-12-13 12:16:17.507257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:50.024 [2024-12-13 12:16:17.507355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.024 [2024-12-13 12:16:17.507364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:50.024 [2024-12-13 12:16:17.507467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.024 [2024-12-13 12:16:17.507475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:50.024 passed 00:10:50.024 Test: blockdev nvme admin passthru ...passed 00:10:50.024 Test: blockdev copy ...passed 00:10:50.024 00:10:50.024 Run Summary: Type Total Ran Passed Failed Inactive 00:10:50.024 suites 1 1 n/a 0 0 00:10:50.024 tests 23 23 23 0 0 00:10:50.024 asserts 152 152 152 0 n/a 00:10:50.024 00:10:50.024 Elapsed time = 0.961 seconds 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.024 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.024 rmmod nvme_tcp 00:10:50.284 rmmod nvme_fabrics 00:10:50.284 rmmod nvme_keyring 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
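After the run summary, teardown mirrors the setup: the subsystem is deleted over RPC and nvmftestfini unloads the initiator-side kernel modules. modprobe -r removes dependent modules first, which is why the single nvme-tcp removal prints rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring. The equivalent manual steps (a sketch; assumes the target app is still serving RPC on the default socket):

    # Delete the test subsystem, then unload the NVMe/TCP initiator stack
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sudo modprobe -v -r nvme-tcp       # dependents (nvme_fabrics, nvme_keyring) go with it
    sudo modprobe -v -r nvme-fabrics   # no-op if the previous line already removed it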
00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193688 ']' 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193688 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 193688 ']' 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193688 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193688 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193688' 00:10:50.284 killing process with pid 193688 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193688 00:10:50.284 12:16:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193688 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.544 12:16:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.451 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.451 00:10:52.451 real 0m9.901s 00:10:52.451 user 0m10.027s 00:10:52.451 sys 0m4.912s 00:10:52.451 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.451 12:16:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.451 ************************************ 00:10:52.451 END TEST nvmf_bdevio 00:10:52.451 ************************************ 00:10:52.451 12:16:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:52.451 00:10:52.451 real 4m33.487s 00:10:52.451 user 10m26.887s 00:10:52.451 sys 1m36.641s 00:10:52.451 
12:16:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.452 12:16:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.452 ************************************ 00:10:52.452 END TEST nvmf_target_core 00:10:52.452 ************************************ 00:10:52.712 12:16:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.712 12:16:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.712 12:16:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.712 12:16:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.712 ************************************ 00:10:52.712 START TEST nvmf_target_extra 00:10:52.712 ************************************ 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.712 * Looking for test storage... 00:10:52.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.712 --rc genhtml_branch_coverage=1 00:10:52.712 --rc genhtml_function_coverage=1 00:10:52.712 --rc genhtml_legend=1 00:10:52.712 --rc geninfo_all_blocks=1 00:10:52.712 --rc geninfo_unexecuted_blocks=1 00:10:52.712 00:10:52.712 ' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.712 --rc genhtml_branch_coverage=1 00:10:52.712 --rc genhtml_function_coverage=1 00:10:52.712 --rc genhtml_legend=1 00:10:52.712 --rc geninfo_all_blocks=1 00:10:52.712 --rc geninfo_unexecuted_blocks=1 00:10:52.712 00:10:52.712 ' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.712 --rc genhtml_branch_coverage=1 00:10:52.712 --rc genhtml_function_coverage=1 00:10:52.712 --rc genhtml_legend=1 00:10:52.712 --rc geninfo_all_blocks=1 00:10:52.712 --rc geninfo_unexecuted_blocks=1 00:10:52.712 00:10:52.712 ' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.712 --rc genhtml_branch_coverage=1 00:10:52.712 --rc genhtml_function_coverage=1 00:10:52.712 --rc genhtml_legend=1 00:10:52.712 --rc geninfo_all_blocks=1 00:10:52.712 --rc geninfo_unexecuted_blocks=1 00:10:52.712 00:10:52.712 ' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
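The lt/cmp_versions trace above decides that lcov 1.15 predates 2.x by splitting both version strings into fields and comparing them numerically from the left. A condensed sketch of the same idea (the real scripts/common.sh splits on ".", "-" and ":" and handles a few more cases than this):

    # Field-wise numeric version compare: succeeds when $1 < $2
    lt() {
        local IFS=. a b v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "1.15 < 2"   # matches the comparison traced above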
00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.712 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.713 12:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.973 ************************************ 00:10:52.973 START TEST nvmf_example 00:10:52.973 ************************************ 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:52.973 * Looking for test storage... 
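The "[: : integer expression expected" message just above is a real shell error rather than test output: line 33 of test/nvmf/common.sh evaluates '[' '' -eq 1 ']', and [ rejects the empty string as a non-integer because the variable being tested is unset in this environment. The usual fix is a default expansion; a sketch (the variable name is a placeholder, not the one actually used at that line):

    # '[' '' -eq 1 ']' -> "integer expression expected"; default the value instead
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "flag set"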
00:10:52.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.973 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.974 --rc genhtml_branch_coverage=1 00:10:52.974 --rc genhtml_function_coverage=1 00:10:52.974 --rc genhtml_legend=1 00:10:52.974 --rc geninfo_all_blocks=1 00:10:52.974 --rc geninfo_unexecuted_blocks=1 00:10:52.974 00:10:52.974 ' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.974 --rc genhtml_branch_coverage=1 00:10:52.974 --rc genhtml_function_coverage=1 00:10:52.974 --rc genhtml_legend=1 00:10:52.974 --rc geninfo_all_blocks=1 00:10:52.974 --rc geninfo_unexecuted_blocks=1 00:10:52.974 00:10:52.974 ' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.974 --rc genhtml_branch_coverage=1 00:10:52.974 --rc genhtml_function_coverage=1 00:10:52.974 --rc genhtml_legend=1 00:10:52.974 --rc geninfo_all_blocks=1 00:10:52.974 --rc geninfo_unexecuted_blocks=1 00:10:52.974 00:10:52.974 ' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.974 --rc genhtml_branch_coverage=1 00:10:52.974 --rc genhtml_function_coverage=1 00:10:52.974 --rc genhtml_legend=1 00:10:52.974 --rc geninfo_all_blocks=1 00:10:52.974 --rc geninfo_unexecuted_blocks=1 00:10:52.974 00:10:52.974 ' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:52.974 12:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:52.974 12:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.974 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:59.550 12:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:59.550 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:59.550 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:59.550 Found net devices under 0000:af:00.0: cvl_0_0 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:59.550 Found net devices under 0000:af:00.1: cvl_0_1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.550 12:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:10:59.550 00:10:59.550 --- 10.0.0.2 ping statistics --- 00:10:59.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.550 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:59.550 00:10:59.550 --- 10.0.0.1 ping statistics --- 00:10:59.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.550 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:59.550 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197469 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197469 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197469 ']' 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.551 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.120 12:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:00.120 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:10.107 Initializing NVMe Controllers 00:11:10.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:10.107 Initialization complete. Launching workers. 00:11:10.107 ======================================================== 00:11:10.107 Latency(us) 00:11:10.107 Device Information : IOPS MiB/s Average min max 00:11:10.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18407.86 71.91 3477.02 682.35 16059.79 00:11:10.107 ======================================================== 00:11:10.107 Total : 18407.86 71.91 3477.02 682.35 16059.79 00:11:10.107 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.107 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.367 rmmod nvme_tcp 00:11:10.367 rmmod nvme_fabrics 00:11:10.367 rmmod nvme_keyring 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197469 ']' 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197469 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197469 ']' 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197469 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197469 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
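# nvmf_example.sh@61 above exercises the target with spdk_nvme_perf. Flag
# meanings for this run, per the perf tool's usage text:
#   -q 64    64 outstanding I/Os per queue
#   -o 4096  4 KiB I/O size
#   -w randrw -M 30   random mixed workload, 30% reads / 70% writes
#   -t 10    10-second run
#   -r '...' transport ID of the listener created above
# The summary table works out to ~18.4k IOPS (~71.9 MiB/s) at ~3.48 ms
# average latency over the 10-second window.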
process_name=nvmf 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197469' 00:11:10.367 killing process with pid 197469 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197469 00:11:10.367 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197469 00:11:10.628 nvmf threads initialize successfully 00:11:10.628 bdev subsystem init successfully 00:11:10.628 created a nvmf target service 00:11:10.628 create targets's poll groups done 00:11:10.628 all subsystems of target started 00:11:10.628 nvmf target is running 00:11:10.628 all subsystems of target stopped 00:11:10.628 destroy targets's poll groups done 00:11:10.628 destroyed the nvmf target service 00:11:10.628 bdev subsystem finish successfully 00:11:10.628 nvmf threads destroy successfully 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.628 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.536 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.537 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:12.537 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.537 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.537 00:11:12.537 real 0m19.782s 00:11:12.537 user 0m45.966s 00:11:12.537 sys 0m6.055s 00:11:12.537 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.537 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.537 ************************************ 00:11:12.537 END TEST nvmf_example 00:11:12.537 ************************************ 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:12.797 12:16:40 
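# nvmftestfini above unwinds the fixture: unload the kernel initiator modules,
# kill the target by pid (after a comm-name sanity check), restore iptables
# minus the SPDK_NVMF rules, drop the test namespace, and flush the leftover
# cvl_0_1 address. A condensed sketch of the traced steps; the body of
# _remove_spdk_ns is an assumption here, the rest is copied from the log:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # nvmfpid=197469 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore     # the iptr helper
ip netns delete cvl_0_0_ns_spdk                          # assumed gist of _remove_spdk_ns
ip -4 addr flush cvl_0_1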
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.797 ************************************ 00:11:12.797 START TEST nvmf_filesystem 00:11:12.797 ************************************ 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:12.797 * Looking for test storage... 00:11:12.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.797 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.798 --rc genhtml_branch_coverage=1 00:11:12.798 --rc genhtml_function_coverage=1 00:11:12.798 --rc genhtml_legend=1 00:11:12.798 --rc geninfo_all_blocks=1 00:11:12.798 --rc geninfo_unexecuted_blocks=1 00:11:12.798 00:11:12.798 ' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.798 --rc genhtml_branch_coverage=1 00:11:12.798 --rc genhtml_function_coverage=1 00:11:12.798 --rc genhtml_legend=1 00:11:12.798 --rc geninfo_all_blocks=1 00:11:12.798 --rc geninfo_unexecuted_blocks=1 00:11:12.798 00:11:12.798 ' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.798 --rc genhtml_branch_coverage=1 00:11:12.798 --rc genhtml_function_coverage=1 00:11:12.798 --rc genhtml_legend=1 00:11:12.798 --rc geninfo_all_blocks=1 00:11:12.798 --rc geninfo_unexecuted_blocks=1 00:11:12.798 00:11:12.798 ' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.798 --rc genhtml_branch_coverage=1 00:11:12.798 --rc genhtml_function_coverage=1 00:11:12.798 --rc genhtml_legend=1 00:11:12.798 --rc geninfo_all_blocks=1 00:11:12.798 --rc geninfo_unexecuted_blocks=1 00:11:12.798 00:11:12.798 ' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:12.798 12:16:40 
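# The lt/cmp_versions trace above is scripts/common.sh checking whether the
# installed lcov predates 2.x (1.15 < 2 here, so the pre-2.0 option set gets
# used). The comparison splits both versions on ".", "-" and ":" and compares
# numerically field by field, with missing fields counting as 0. A hedged
# reconstruction (the function name below is illustrative, not the source's):
cmp_versions_sketch() {
    local IFS=.-: op="$2" ver1 ver2 v max
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]    # all fields equal: true only for <=, >= or ==
}
cmp_versions_sketch 1.15 '<' 2 && echo "lcov is pre-2.0"   # true, as in the trace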
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:12.798 
12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:12.798 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:12.798 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:12.799 #define SPDK_CONFIG_H 00:11:12.799 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:12.799 #define SPDK_CONFIG_APPS 1 00:11:12.799 #define SPDK_CONFIG_ARCH native 00:11:12.799 #undef SPDK_CONFIG_ASAN 00:11:12.799 #undef SPDK_CONFIG_AVAHI 00:11:12.799 #undef SPDK_CONFIG_CET 00:11:12.799 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:12.799 #define SPDK_CONFIG_COVERAGE 1 00:11:12.799 #define SPDK_CONFIG_CROSS_PREFIX 00:11:12.799 #undef SPDK_CONFIG_CRYPTO 00:11:12.799 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:12.799 #undef SPDK_CONFIG_CUSTOMOCF 00:11:12.799 #undef SPDK_CONFIG_DAOS 00:11:12.799 #define SPDK_CONFIG_DAOS_DIR 00:11:12.799 #define SPDK_CONFIG_DEBUG 1 00:11:12.799 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:12.799 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:12.799 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:12.799 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:12.799 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:12.799 #undef SPDK_CONFIG_DPDK_UADK 00:11:12.799 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:12.799 #define SPDK_CONFIG_EXAMPLES 1 00:11:12.799 #undef SPDK_CONFIG_FC 00:11:12.799 #define SPDK_CONFIG_FC_PATH 00:11:12.799 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:12.799 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:12.799 #define SPDK_CONFIG_FSDEV 1 00:11:12.799 #undef SPDK_CONFIG_FUSE 00:11:12.799 #undef SPDK_CONFIG_FUZZER 00:11:12.799 #define SPDK_CONFIG_FUZZER_LIB 00:11:12.799 #undef SPDK_CONFIG_GOLANG 00:11:12.799 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:12.799 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:12.799 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:12.799 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:12.799 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:12.799 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:12.799 #undef SPDK_CONFIG_HAVE_LZ4 00:11:12.799 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:12.799 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:12.799 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:12.799 #define SPDK_CONFIG_IDXD 1 00:11:12.799 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:12.799 #undef SPDK_CONFIG_IPSEC_MB 00:11:12.799 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:12.799 #define SPDK_CONFIG_ISAL 1 00:11:12.799 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:12.799 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:12.799 #define SPDK_CONFIG_LIBDIR 00:11:12.799 #undef SPDK_CONFIG_LTO 00:11:12.799 #define SPDK_CONFIG_MAX_LCORES 128 00:11:12.799 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:12.799 #define SPDK_CONFIG_NVME_CUSE 1 00:11:12.799 #undef SPDK_CONFIG_OCF 00:11:12.799 #define SPDK_CONFIG_OCF_PATH 00:11:12.799 #define SPDK_CONFIG_OPENSSL_PATH 00:11:12.799 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:12.799 #define SPDK_CONFIG_PGO_DIR 00:11:12.799 #undef SPDK_CONFIG_PGO_USE 00:11:12.799 #define SPDK_CONFIG_PREFIX /usr/local 00:11:12.799 #undef SPDK_CONFIG_RAID5F 00:11:12.799 #undef SPDK_CONFIG_RBD 00:11:12.799 #define SPDK_CONFIG_RDMA 1 00:11:12.799 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:12.799 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:12.799 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:12.799 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:12.799 #define SPDK_CONFIG_SHARED 1 00:11:12.799 #undef SPDK_CONFIG_SMA 00:11:12.799 #define SPDK_CONFIG_TESTS 1 00:11:12.799 #undef SPDK_CONFIG_TSAN 00:11:12.799 #define SPDK_CONFIG_UBLK 1 00:11:12.799 #define SPDK_CONFIG_UBSAN 1 00:11:12.799 #undef SPDK_CONFIG_UNIT_TESTS 00:11:12.799 #undef SPDK_CONFIG_URING 00:11:12.799 #define SPDK_CONFIG_URING_PATH 00:11:12.799 #undef SPDK_CONFIG_URING_ZNS 00:11:12.799 #undef SPDK_CONFIG_USDT 00:11:12.799 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:12.799 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:12.799 #define SPDK_CONFIG_VFIO_USER 1 00:11:12.799 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:12.799 #define SPDK_CONFIG_VHOST 1 00:11:12.799 #define SPDK_CONFIG_VIRTIO 1 00:11:12.799 #undef SPDK_CONFIG_VTUNE 00:11:12.799 #define SPDK_CONFIG_VTUNE_DIR 00:11:12.799 #define SPDK_CONFIG_WERROR 1 00:11:12.799 #define SPDK_CONFIG_WPDK_DIR 00:11:12.799 #undef SPDK_CONFIG_XNVME 00:11:12.799 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
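# applications.sh@23 above is why the entire include/spdk/config.h shows up
# in the log: the script glob-matches the file contents against
# "#define SPDK_CONFIG_DEBUG" to decide whether this is a debug build (it
# matches here). A minimal sketch of that check, with $rootdir standing in
# for the SPDK checkout:
if [[ -e "$rootdir/include/spdk/config.h" ]] &&
   [[ $(< "$rootdir/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
fi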
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.799 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:13.063 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:13.063 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:13.063 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:13.064 
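# The long run of ": 0" / ": 1" followed by "export SPDK_TEST_*" above is
# autotest_common.sh giving every test flag a default and exporting it. The
# xtrace pattern is consistent with the default-if-unset idiom sketched
# below; the exact source line is inferred from the trace, not quoted:
: "${SPDK_TEST_NVMF:=1}"    # keep the caller's value, else apply the default
export SPDK_TEST_NVMF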
12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:13.064 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:13.065 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 199825 ]] 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 199825 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ugsErR 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:13.065 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ugsErR/tests/target /tmp/spdk.ugsErR 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88900292608 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552413696 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6652121088 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=47766175744 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:11:13.065 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23015424 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776026624 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=180224 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:13.066 * Looking for test storage... 
00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88900292608 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8866713600 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.066 --rc genhtml_branch_coverage=1 00:11:13.066 --rc genhtml_function_coverage=1 00:11:13.066 --rc genhtml_legend=1 00:11:13.066 --rc geninfo_all_blocks=1 00:11:13.066 --rc geninfo_unexecuted_blocks=1 00:11:13.066 00:11:13.066 ' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.066 --rc genhtml_branch_coverage=1 00:11:13.066 --rc genhtml_function_coverage=1 00:11:13.066 --rc genhtml_legend=1 00:11:13.066 --rc geninfo_all_blocks=1 00:11:13.066 --rc geninfo_unexecuted_blocks=1 00:11:13.066 00:11:13.066 ' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.066 --rc genhtml_branch_coverage=1 00:11:13.066 --rc genhtml_function_coverage=1 00:11:13.066 --rc genhtml_legend=1 00:11:13.066 --rc geninfo_all_blocks=1 00:11:13.066 --rc geninfo_unexecuted_blocks=1 00:11:13.066 00:11:13.066 ' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.066 --rc genhtml_branch_coverage=1 00:11:13.066 --rc genhtml_function_coverage=1 00:11:13.066 --rc genhtml_legend=1 00:11:13.066 --rc geninfo_all_blocks=1 00:11:13.066 --rc geninfo_unexecuted_blocks=1 00:11:13.066 00:11:13.066 ' 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.066 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.067 12:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.067 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:19.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:19.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.642 12:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:19.642 Found net devices under 0000:af:00.0: cvl_0_0 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:19.642 Found net devices under 0000:af:00.1: cvl_0_1 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.642 12:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.642 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:19.643 00:11:19.643 --- 10.0.0.2 ping statistics --- 00:11:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.643 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:19.643 00:11:19.643 --- 10.0.0.1 ping statistics --- 00:11:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.643 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 ************************************ 00:11:19.643 START TEST nvmf_filesystem_no_in_capsule 00:11:19.643 ************************************ 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=203024 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 203024 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 203024 ']' 00:11:19.643 12:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.643 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 [2024-12-13 12:16:46.823517] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:19.643 [2024-12-13 12:16:46.823563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.643 [2024-12-13 12:16:46.900669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.643 [2024-12-13 12:16:46.924272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.643 [2024-12-13 12:16:46.924310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.643 [2024-12-13 12:16:46.924317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.643 [2024-12-13 12:16:46.924322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.643 [2024-12-13 12:16:46.924327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
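The EAL notices above show nvmf_tgt (nvmfpid 203024) coming up inside the cvl_0_0_ns_spdk namespace that the earlier ip/iptables entries prepared, and the rpc_cmd entries that follow assemble the filesystem test's target. Stripped of the harness wrappers, the bring-up reduces to roughly the sketch below, drawn from the traced commands — paths are shortened, and driving scripts/rpc.py directly instead of the harness's rpc_cmd wrapper is an assumption:

  # Target NIC port moves into its own namespace; the initiator port stays
  # in the root namespace, giving a two-endpoint NVMe/TCP loopback on one host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Open the NVMe/TCP port on the initiator-facing interface, then sanity-check.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  # Start the target inside the namespace, then wire up the subsystem over RPC.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420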
00:11:19.643 [2024-12-13 12:16:46.925698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.643 [2024-12-13 12:16:46.925824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.643 [2024-12-13 12:16:46.925868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.643 [2024-12-13 12:16:46.925869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 [2024-12-13 12:16:47.053940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 Malloc1 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.643 12:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.643 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 [2024-12-13 12:16:47.195341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:19.644 { 00:11:19.644 "name": "Malloc1", 00:11:19.644 "aliases": [ 00:11:19.644 "74695039-2acb-4140-859c-830a8a4b2186" 00:11:19.644 ], 00:11:19.644 "product_name": "Malloc disk", 00:11:19.644 "block_size": 512, 00:11:19.644 "num_blocks": 1048576, 00:11:19.644 "uuid": "74695039-2acb-4140-859c-830a8a4b2186", 00:11:19.644 "assigned_rate_limits": { 00:11:19.644 "rw_ios_per_sec": 0, 00:11:19.644 "rw_mbytes_per_sec": 0, 00:11:19.644 "r_mbytes_per_sec": 0, 00:11:19.644 "w_mbytes_per_sec": 0 00:11:19.644 }, 00:11:19.644 "claimed": true, 00:11:19.644 "claim_type": "exclusive_write", 00:11:19.644 "zoned": false, 00:11:19.644 "supported_io_types": { 00:11:19.644 "read": 
true, 00:11:19.644 "write": true, 00:11:19.644 "unmap": true, 00:11:19.644 "flush": true, 00:11:19.644 "reset": true, 00:11:19.644 "nvme_admin": false, 00:11:19.644 "nvme_io": false, 00:11:19.644 "nvme_io_md": false, 00:11:19.644 "write_zeroes": true, 00:11:19.644 "zcopy": true, 00:11:19.644 "get_zone_info": false, 00:11:19.644 "zone_management": false, 00:11:19.644 "zone_append": false, 00:11:19.644 "compare": false, 00:11:19.644 "compare_and_write": false, 00:11:19.644 "abort": true, 00:11:19.644 "seek_hole": false, 00:11:19.644 "seek_data": false, 00:11:19.644 "copy": true, 00:11:19.644 "nvme_iov_md": false 00:11:19.644 }, 00:11:19.644 "memory_domains": [ 00:11:19.644 { 00:11:19.644 "dma_device_id": "system", 00:11:19.644 "dma_device_type": 1 00:11:19.644 }, 00:11:19.644 { 00:11:19.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.644 "dma_device_type": 2 00:11:19.644 } 00:11:19.644 ], 00:11:19.644 "driver_specific": {} 00:11:19.644 } 00:11:19.644 ]' 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:19.644 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.024 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.024 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.024 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.024 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.024 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:22.931 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:22.932 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:22.932 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.500 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:23.500 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.438 ************************************ 00:11:24.438 START TEST filesystem_ext4 00:11:24.438 ************************************ 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
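The banner above opens the first filesystem subtest; everything it needs was provisioned in the preceding entries. A hedged recap of that target-side and host-side setup as standalone commands (every argument is copied from the rpc_cmd and nvme-cli calls above; only the relative scripts/rpc.py path is an assumption about the SPDK tree):

    # Target side: TCP transport with in-capsule data disabled, a 512 MiB
    # malloc bdev, and a subsystem exposing it on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect over NVMe/TCP, then carve one GPT partition
    # out of the new namespace
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Between the connect and the parted call, the suite polls lsblk -l -o NAME,SERIAL until the SPDKISFASTANDAWESOME serial appears, which is how it resolves the device name nvme0n1 used everywhere below.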
00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:24.438 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:24.438 mke2fs 1.47.0 (5-Feb-2023) 00:11:24.697 Discarding device blocks: 0/522240 done 00:11:24.698 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:24.698 Filesystem UUID: 4d896282-c132-469a-9f27-80658c0606b5 00:11:24.698 Superblock backups stored on blocks: 00:11:24.698 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:24.698 00:11:24.698 Allocating group tables: 0/64 done 00:11:24.698 Writing inode tables: 0/64 done 00:11:24.698 Creating journal (8192 blocks): done 00:11:24.698 Writing superblocks and filesystem accounting information: 0/64 done 00:11:24.698 00:11:24.698 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:24.698 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.973 
12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 203024 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.973 00:11:29.973 real 0m5.587s 00:11:29.973 user 0m0.028s 00:11:29.973 sys 0m0.116s 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.973 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:29.973 ************************************ 00:11:29.973 END TEST filesystem_ext4 00:11:29.973 ************************************ 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.233 ************************************ 00:11:30.233 START TEST filesystem_btrfs 00:11:30.233 ************************************ 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:30.233 12:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:30.233 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:30.801 btrfs-progs v6.8.1 00:11:30.801 See https://btrfs.readthedocs.io for more information. 00:11:30.801 00:11:30.801 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:30.801 NOTE: several default settings have changed in version 5.15, please make sure 00:11:30.801 this does not affect your deployments: 00:11:30.801 - DUP for metadata (-m dup) 00:11:30.801 - enabled no-holes (-O no-holes) 00:11:30.801 - enabled free-space-tree (-R free-space-tree) 00:11:30.801 00:11:30.801 Label: (null) 00:11:30.801 UUID: f7b33ef5-1df4-46d0-a6da-1205b6a19d0c 00:11:30.801 Node size: 16384 00:11:30.801 Sector size: 4096 (CPU page size: 4096) 00:11:30.801 Filesystem size: 510.00MiB 00:11:30.801 Block group profiles: 00:11:30.801 Data: single 8.00MiB 00:11:30.801 Metadata: DUP 32.00MiB 00:11:30.801 System: DUP 8.00MiB 00:11:30.801 SSD detected: yes 00:11:30.801 Zoned device: no 00:11:30.801 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:30.801 Checksum: crc32c 00:11:30.801 Number of devices: 1 00:11:30.801 Devices: 00:11:30.801 ID SIZE PATH 00:11:30.801 1 510.00MiB /dev/nvme0n1p1 00:11:30.801 00:11:30.801 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:30.801 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 203024 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.061 
12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.061 00:11:31.061 real 0m0.867s 00:11:31.061 user 0m0.021s 00:11:31.061 sys 0m0.166s 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.061 ************************************ 00:11:31.061 END TEST filesystem_btrfs 00:11:31.061 ************************************ 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.061 ************************************ 00:11:31.061 START TEST filesystem_xfs 00:11:31.061 ************************************ 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:31.061 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:31.320 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:31.320 = sectsz=512 attr=2, projid32bit=1 00:11:31.320 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:31.320 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:31.320 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:31.320 = sunit=0 swidth=0 blks 00:11:31.320 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:31.320 log =internal log bsize=4096 blocks=16384, version=2 00:11:31.320 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:31.320 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:31.889 Discarding blocks...Done. 00:11:31.889 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.889 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:35.176 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.177 00:11:35.177 real 0m3.614s 00:11:35.177 user 0m0.028s 00:11:35.177 sys 0m0.117s 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.177 ************************************ 00:11:35.177 END TEST filesystem_xfs 00:11:35.177 ************************************ 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.177 12:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 203024 ']' 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203024' 00:11:35.177 killing process with pid 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 203024 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:35.177 00:11:35.177 real 0m16.100s 00:11:35.177 user 1m3.385s 00:11:35.177 sys 0m1.507s 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.177 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.177 ************************************ 00:11:35.177 END TEST nvmf_filesystem_no_in_capsule 00:11:35.177 ************************************ 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.436 ************************************ 00:11:35.436 START TEST nvmf_filesystem_in_capsule 00:11:35.436 ************************************ 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=205933 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 205933 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 205933 ']' 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
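At this point the no-in-capsule half is complete: each of the three subtests ran the same create/exercise/verify cycle on /dev/nvme0n1p1, and the teardown disconnected the host, deleted the subsystem, and killed the target (pid 203024 above). The run now repeats the whole scenario with in-capsule data enabled; the only functional difference is the -c argument to nvmf_create_transport, which grows from 0 to 4096 so that up to 4 KiB of write data can travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. A sketch of the repeated cycle and teardown, with commands lifted from the log ($fstype stands for ext4, btrfs, or xfs; $force is -F for ext4 and -f otherwise):

    # Per-filesystem cycle, run once for each fstype
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync      # prove the mount is writable end to end
    rm /mnt/device/aaa && sync
    umount /mnt/device
    # Teardown after the last subtest
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # The only change for this second half of the suite:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # was -c 0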
00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.436 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.436 [2024-12-13 12:17:02.992242] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:35.436 [2024-12-13 12:17:02.992281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.436 [2024-12-13 12:17:03.066806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.436 [2024-12-13 12:17:03.089683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.436 [2024-12-13 12:17:03.089722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.436 [2024-12-13 12:17:03.089731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.436 [2024-12-13 12:17:03.089738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.436 [2024-12-13 12:17:03.089760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.436 [2024-12-13 12:17:03.091107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.436 [2024-12-13 12:17:03.091220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.436 [2024-12-13 12:17:03.091326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.436 [2024-12-13 12:17:03.091327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.695 [2024-12-13 12:17:03.219691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.695 12:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.695 Malloc1 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.695 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.696 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.696 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.696 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.696 [2024-12-13 12:17:03.393952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:35.955 12:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:35.955 { 00:11:35.955 "name": "Malloc1", 00:11:35.955 "aliases": [ 00:11:35.955 "53713a39-df69-412c-97e1-b05790eada53" 00:11:35.955 ], 00:11:35.955 "product_name": "Malloc disk", 00:11:35.955 "block_size": 512, 00:11:35.955 "num_blocks": 1048576, 00:11:35.955 "uuid": "53713a39-df69-412c-97e1-b05790eada53", 00:11:35.955 "assigned_rate_limits": { 00:11:35.955 "rw_ios_per_sec": 0, 00:11:35.955 "rw_mbytes_per_sec": 0, 00:11:35.955 "r_mbytes_per_sec": 0, 00:11:35.955 "w_mbytes_per_sec": 0 00:11:35.955 }, 00:11:35.955 "claimed": true, 00:11:35.955 "claim_type": "exclusive_write", 00:11:35.955 "zoned": false, 00:11:35.955 "supported_io_types": { 00:11:35.955 "read": true, 00:11:35.955 "write": true, 00:11:35.955 "unmap": true, 00:11:35.955 "flush": true, 00:11:35.955 "reset": true, 00:11:35.955 "nvme_admin": false, 00:11:35.955 "nvme_io": false, 00:11:35.955 "nvme_io_md": false, 00:11:35.955 "write_zeroes": true, 00:11:35.955 "zcopy": true, 00:11:35.955 "get_zone_info": false, 00:11:35.955 "zone_management": false, 00:11:35.955 "zone_append": false, 00:11:35.955 "compare": false, 00:11:35.955 "compare_and_write": false, 00:11:35.955 "abort": true, 00:11:35.955 "seek_hole": false, 00:11:35.955 "seek_data": false, 00:11:35.955 "copy": true, 00:11:35.955 "nvme_iov_md": false 00:11:35.955 }, 00:11:35.955 "memory_domains": [ 00:11:35.955 { 00:11:35.955 "dma_device_id": "system", 00:11:35.955 "dma_device_type": 1 00:11:35.955 }, 00:11:35.955 { 00:11:35.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.955 "dma_device_type": 2 00:11:35.955 } 00:11:35.955 ], 00:11:35.955 "driver_specific": {} 00:11:35.955 } 00:11:35.955 ]' 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.955 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.332 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.332 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.332 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.332 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.332 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.236 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.236 12:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.803 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.182 ************************************ 00:11:41.182 START TEST filesystem_in_capsule_ext4 00:11:41.182 ************************************ 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:41.182 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.182 mke2fs 1.47.0 (5-Feb-2023) 00:11:41.182 Discarding device blocks: 0/522240 done 00:11:41.182 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:41.182 Filesystem UUID: 59251cf6-4af8-4952-9321-e754f151422c 00:11:41.182 Superblock backups stored on blocks: 00:11:41.182 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:41.182 00:11:41.182 Allocating group tables: 0/64 done 00:11:41.182 Writing inode tables: 
0/64 done 00:11:43.088 Creating journal (8192 blocks): done 00:11:43.347 Writing superblocks and filesystem accounting information: 0/64 done 00:11:43.347 00:11:43.347 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:43.347 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 205933 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.918 00:11:49.918 real 0m8.004s 00:11:49.918 user 0m0.042s 00:11:49.918 sys 0m0.055s 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.918 ************************************ 00:11:49.918 END TEST filesystem_in_capsule_ext4 00:11:49.918 ************************************ 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.918 
************************************ 00:11:49.918 START TEST filesystem_in_capsule_btrfs 00:11:49.918 ************************************ 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.918 btrfs-progs v6.8.1 00:11:49.918 See https://btrfs.readthedocs.io for more information. 00:11:49.918 00:11:49.918 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:49.918 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.918 this does not affect your deployments: 00:11:49.918 - DUP for metadata (-m dup) 00:11:49.918 - enabled no-holes (-O no-holes) 00:11:49.918 - enabled free-space-tree (-R free-space-tree) 00:11:49.918 00:11:49.918 Label: (null) 00:11:49.918 UUID: 1132b54a-d286-4483-8739-4225b367196b 00:11:49.918 Node size: 16384 00:11:49.918 Sector size: 4096 (CPU page size: 4096) 00:11:49.918 Filesystem size: 510.00MiB 00:11:49.918 Block group profiles: 00:11:49.918 Data: single 8.00MiB 00:11:49.918 Metadata: DUP 32.00MiB 00:11:49.918 System: DUP 8.00MiB 00:11:49.918 SSD detected: yes 00:11:49.918 Zoned device: no 00:11:49.918 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.918 Checksum: crc32c 00:11:49.918 Number of devices: 1 00:11:49.918 Devices: 00:11:49.918 ID SIZE PATH 00:11:49.918 1 510.00MiB /dev/nvme0n1p1 00:11:49.918 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:49.918 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 205933 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.918 00:11:49.918 real 0m0.626s 00:11:49.918 user 0m0.029s 00:11:49.918 sys 0m0.112s 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:49.918 ************************************ 00:11:49.918 END TEST filesystem_in_capsule_btrfs 00:11:49.918 ************************************ 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.918 ************************************ 00:11:49.918 START TEST filesystem_in_capsule_xfs 00:11:49.918 ************************************ 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:49.918 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.918 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.918 = sectsz=512 attr=2, projid32bit=1 00:11:49.918 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.918 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.918 data = bsize=4096 blocks=130560, imaxpct=25 00:11:49.918 = sunit=0 swidth=0 blks 00:11:49.918 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.918 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.919 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.919 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:50.485 Discarding blocks...Done. 
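All three in_capsule variants in this suite exercise the same target; what gives them their name is the TCP transport being created with a nonzero in-capsule data size, so small writes travel inside the NVMe command capsule rather than as a separate data transfer. A hedged one-line sketch of that setup; the 4096 value is an assumption and is not read from this log:

# sketch: TCP transport with in-capsule data enabled (size assumed)
./scripts/rpc.py nvmf_create_transport -t tcp -c 4096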
00:11:50.485 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:50.485 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.389 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.389 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:52.389 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.389 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:52.389 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 205933 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.390 00:11:52.390 real 0m2.643s 00:11:52.390 user 0m0.018s 00:11:52.390 sys 0m0.079s 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.390 ************************************ 00:11:52.390 END TEST filesystem_in_capsule_xfs 00:11:52.390 ************************************ 00:11:52.390 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:52.649 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:52.649 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 205933 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 205933 ']' 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 205933 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205933 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205933' 00:11:52.908 killing process with pid 205933 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 205933 00:11:52.908 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 205933 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.168 00:11:53.168 real 0m17.857s 00:11:53.168 user 1m10.349s 00:11:53.168 sys 0m1.408s 00:11:53.168 12:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.168 ************************************ 00:11:53.168 END TEST nvmf_filesystem_in_capsule 00:11:53.168 ************************************ 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.168 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.168 rmmod nvme_tcp 00:11:53.168 rmmod nvme_fabrics 00:11:53.428 rmmod nvme_keyring 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.428 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.335 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.335 00:11:55.335 real 0m42.696s 00:11:55.335 user 2m15.827s 00:11:55.335 sys 0m7.558s 00:11:55.335 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.335 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:55.335 
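nvmftestfini, traced just above, unwinds everything the test setup created. A condensed sketch of the visible teardown steps; the namespace deletion inside _remove_spdk_ns is an assumption, since the trace only shows the xtrace-suppressed wrapper:

# sketch of the teardown path in nvmf/common.sh
sync
modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK-tagged ACCEPT rules
ip netns del cvl_0_0_ns_spdk 2>/dev/null                # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                # clear the initiator-side test address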
************************************ 00:11:55.335 END TEST nvmf_filesystem 00:11:55.335 ************************************ 00:11:55.335 12:17:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.335 12:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.335 12:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.335 12:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.595 ************************************ 00:11:55.595 START TEST nvmf_target_discovery 00:11:55.595 ************************************ 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.595 * Looking for test storage... 00:11:55.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.595 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.596 --rc genhtml_branch_coverage=1 00:11:55.596 --rc genhtml_function_coverage=1 00:11:55.596 --rc genhtml_legend=1 00:11:55.596 --rc geninfo_all_blocks=1 00:11:55.596 --rc geninfo_unexecuted_blocks=1 00:11:55.596 00:11:55.596 ' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.596 --rc genhtml_branch_coverage=1 00:11:55.596 --rc genhtml_function_coverage=1 00:11:55.596 --rc genhtml_legend=1 00:11:55.596 --rc geninfo_all_blocks=1 00:11:55.596 --rc geninfo_unexecuted_blocks=1 00:11:55.596 00:11:55.596 ' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.596 --rc genhtml_branch_coverage=1 00:11:55.596 --rc genhtml_function_coverage=1 00:11:55.596 --rc genhtml_legend=1 00:11:55.596 --rc geninfo_all_blocks=1 00:11:55.596 --rc geninfo_unexecuted_blocks=1 00:11:55.596 00:11:55.596 ' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.596 --rc genhtml_branch_coverage=1 00:11:55.596 --rc genhtml_function_coverage=1 00:11:55.596 --rc genhtml_legend=1 00:11:55.596 --rc geninfo_all_blocks=1 00:11:55.596 --rc geninfo_unexecuted_blocks=1 00:11:55.596 00:11:55.596 ' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.596 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.170 12:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.170 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:02.170 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.170 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.171 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.171 12:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:02.171 00:12:02.171 --- 10.0.0.2 ping statistics --- 00:12:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.171 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:02.171 00:12:02.171 --- 10.0.0.1 ping statistics --- 00:12:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.171 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=212536 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 212536 00:12:02.171 12:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 212536 ']' 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 [2024-12-13 12:17:29.245931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:02.171 [2024-12-13 12:17:29.245986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.171 [2024-12-13 12:17:29.324777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.171 [2024-12-13 12:17:29.348073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.171 [2024-12-13 12:17:29.348109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.171 [2024-12-13 12:17:29.348116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.171 [2024-12-13 12:17:29.348122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.171 [2024-12-13 12:17:29.348127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
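The target for the discovery test is launched inside the namespace built above, and the harness blocks until its RPC socket answers. A simplified sketch of that handshake, with the polling loop standing in for waitforlisten's real retry/timeout logic:

# sketch: start nvmf_tgt in the target netns and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1          # bail out if the target died during startup
    sleep 0.5
done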
00:12:02.171 [2024-12-13 12:17:29.349599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.171 [2024-12-13 12:17:29.349707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.171 [2024-12-13 12:17:29.349802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.171 [2024-12-13 12:17:29.349802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 [2024-12-13 12:17:29.482597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 Null1 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 12:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 [2024-12-13 12:17:29.555916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.171 Null2 00:12:02.171 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:02.172 Null3 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 Null4 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.172 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:02.433 00:12:02.433 Discovery Log Number of Records 6, Generation counter 6 00:12:02.433 =====Discovery Log Entry 0====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: current discovery subsystem 00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4420 00:12:02.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: explicit discovery connections, duplicate discovery information 00:12:02.433 sectype: none 00:12:02.433 =====Discovery Log Entry 1====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: nvme subsystem 00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4420 00:12:02.433 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: none 00:12:02.433 sectype: none 00:12:02.433 =====Discovery Log Entry 2====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: nvme subsystem 00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4420 00:12:02.433 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: none 00:12:02.433 sectype: none 00:12:02.433 =====Discovery Log Entry 3====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: nvme subsystem 00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4420 00:12:02.433 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: none 00:12:02.433 sectype: none 00:12:02.433 =====Discovery Log Entry 4====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: nvme subsystem 
00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4420 00:12:02.433 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: none 00:12:02.433 sectype: none 00:12:02.433 =====Discovery Log Entry 5====== 00:12:02.433 trtype: tcp 00:12:02.433 adrfam: ipv4 00:12:02.433 subtype: discovery subsystem referral 00:12:02.433 treq: not required 00:12:02.433 portid: 0 00:12:02.433 trsvcid: 4430 00:12:02.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:02.433 traddr: 10.0.0.2 00:12:02.433 eflags: none 00:12:02.433 sectype: none 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:02.433 Perform nvmf subsystem discovery via RPC 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.433 [ 00:12:02.433 { 00:12:02.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:02.433 "subtype": "Discovery", 00:12:02.433 "listen_addresses": [ 00:12:02.433 { 00:12:02.433 "trtype": "TCP", 00:12:02.433 "adrfam": "IPv4", 00:12:02.433 "traddr": "10.0.0.2", 00:12:02.433 "trsvcid": "4420" 00:12:02.433 } 00:12:02.433 ], 00:12:02.433 "allow_any_host": true, 00:12:02.433 "hosts": [] 00:12:02.433 }, 00:12:02.433 { 00:12:02.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:02.433 "subtype": "NVMe", 00:12:02.433 "listen_addresses": [ 00:12:02.433 { 00:12:02.433 "trtype": "TCP", 00:12:02.433 "adrfam": "IPv4", 00:12:02.433 "traddr": "10.0.0.2", 00:12:02.433 "trsvcid": "4420" 00:12:02.433 } 00:12:02.433 ], 00:12:02.433 "allow_any_host": true, 00:12:02.433 "hosts": [], 00:12:02.433 "serial_number": "SPDK00000000000001", 00:12:02.433 "model_number": "SPDK bdev Controller", 00:12:02.433 "max_namespaces": 32, 00:12:02.433 "min_cntlid": 1, 00:12:02.433 "max_cntlid": 65519, 00:12:02.433 "namespaces": [ 00:12:02.433 { 00:12:02.433 "nsid": 1, 00:12:02.433 "bdev_name": "Null1", 00:12:02.433 "name": "Null1", 00:12:02.433 "nguid": "4D0526C50D0845C6ACAB7F4F86D97A7F", 00:12:02.433 "uuid": "4d0526c5-0d08-45c6-acab-7f4f86d97a7f" 00:12:02.433 } 00:12:02.433 ] 00:12:02.433 }, 00:12:02.433 { 00:12:02.433 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:02.433 "subtype": "NVMe", 00:12:02.433 "listen_addresses": [ 00:12:02.433 { 00:12:02.433 "trtype": "TCP", 00:12:02.433 "adrfam": "IPv4", 00:12:02.433 "traddr": "10.0.0.2", 00:12:02.433 "trsvcid": "4420" 00:12:02.433 } 00:12:02.433 ], 00:12:02.433 "allow_any_host": true, 00:12:02.433 "hosts": [], 00:12:02.433 "serial_number": "SPDK00000000000002", 00:12:02.433 "model_number": "SPDK bdev Controller", 00:12:02.433 "max_namespaces": 32, 00:12:02.433 "min_cntlid": 1, 00:12:02.433 "max_cntlid": 65519, 00:12:02.433 "namespaces": [ 00:12:02.433 { 00:12:02.433 "nsid": 1, 00:12:02.433 "bdev_name": "Null2", 00:12:02.433 "name": "Null2", 00:12:02.433 "nguid": "B6F5733ADA6E4A2D96428C4CA5A88C18", 00:12:02.433 "uuid": "b6f5733a-da6e-4a2d-9642-8c4ca5a88c18" 00:12:02.433 } 00:12:02.433 ] 00:12:02.433 }, 00:12:02.433 { 00:12:02.433 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:02.433 "subtype": "NVMe", 00:12:02.433 "listen_addresses": [ 00:12:02.433 { 00:12:02.433 "trtype": "TCP", 00:12:02.433 "adrfam": "IPv4", 00:12:02.433 "traddr": "10.0.0.2", 
00:12:02.433 "trsvcid": "4420" 00:12:02.433 } 00:12:02.433 ], 00:12:02.433 "allow_any_host": true, 00:12:02.433 "hosts": [], 00:12:02.433 "serial_number": "SPDK00000000000003", 00:12:02.433 "model_number": "SPDK bdev Controller", 00:12:02.433 "max_namespaces": 32, 00:12:02.433 "min_cntlid": 1, 00:12:02.433 "max_cntlid": 65519, 00:12:02.433 "namespaces": [ 00:12:02.433 { 00:12:02.433 "nsid": 1, 00:12:02.433 "bdev_name": "Null3", 00:12:02.433 "name": "Null3", 00:12:02.433 "nguid": "3546FACFA5774E6781F6EE087725DB3E", 00:12:02.433 "uuid": "3546facf-a577-4e67-81f6-ee087725db3e" 00:12:02.433 } 00:12:02.433 ] 00:12:02.433 }, 00:12:02.433 { 00:12:02.433 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:02.433 "subtype": "NVMe", 00:12:02.433 "listen_addresses": [ 00:12:02.433 { 00:12:02.433 "trtype": "TCP", 00:12:02.433 "adrfam": "IPv4", 00:12:02.433 "traddr": "10.0.0.2", 00:12:02.433 "trsvcid": "4420" 00:12:02.433 } 00:12:02.433 ], 00:12:02.433 "allow_any_host": true, 00:12:02.433 "hosts": [], 00:12:02.433 "serial_number": "SPDK00000000000004", 00:12:02.433 "model_number": "SPDK bdev Controller", 00:12:02.433 "max_namespaces": 32, 00:12:02.433 "min_cntlid": 1, 00:12:02.433 "max_cntlid": 65519, 00:12:02.433 "namespaces": [ 00:12:02.433 { 00:12:02.433 "nsid": 1, 00:12:02.433 "bdev_name": "Null4", 00:12:02.433 "name": "Null4", 00:12:02.433 "nguid": "FA09F7DBA28248359E5743B549BA2F95", 00:12:02.433 "uuid": "fa09f7db-a282-4835-9e57-43b549ba2f95" 00:12:02.433 } 00:12:02.433 ] 00:12:02.433 } 00:12:02.433 ] 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.433 12:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:02.433 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:02.434 12:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.434 rmmod nvme_tcp 00:12:02.434 rmmod nvme_fabrics 00:12:02.434 rmmod nvme_keyring 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 212536 ']' 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 212536 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 212536 ']' 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 212536 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.434 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212536 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212536' 00:12:02.694 killing process with pid 212536 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 212536 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 212536 00:12:02.694 12:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.694 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.235 00:12:05.235 real 0m9.356s 00:12:05.235 user 0m5.730s 00:12:05.235 sys 0m4.840s 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 ************************************ 00:12:05.235 END TEST nvmf_target_discovery 00:12:05.235 ************************************ 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.235 ************************************ 00:12:05.235 START TEST nvmf_referrals 00:12:05.235 ************************************ 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:05.235 * Looking for test storage... 
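The nvmf_target_discovery exercise that just finished reduces to one RPC recipe repeated for Null1 through Null4: create a null bdev, wrap it in a subsystem, attach the bdev as a namespace, expose the subsystem on the TCP listener, then publish the discovery service plus one referral and confirm the result with nvme discover (six records: the current discovery subsystem, four NVMe subsystems, one referral) before tearing everything down in reverse. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt with a TCP transport and driving it via scripts/rpc.py from an SPDK checkout rather than the harness's rpc_cmd wrapper:

  rpc=scripts/rpc.py                                    # assumed path; rpc_cmd in the log wraps the same client
  $rpc bdev_null_create Null1 102400 512                # 102400 MB null bdev, 512 B blocks, as in the log
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # shows up as Discovery Log Entry 5
  nvme discover -t tcp -a 10.0.0.2 -s 4420              # produces the discovery dump captured above
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  $rpc bdev_null_delete Null1

The same state is also visible structurally through `$rpc nvmf_get_subsystems`, which is what the "Perform nvmf subsystem discovery via RPC" step above dumps as JSON.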
00:12:05.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.235 --rc genhtml_branch_coverage=1 00:12:05.235 --rc genhtml_function_coverage=1 00:12:05.235 --rc genhtml_legend=1 00:12:05.235 --rc geninfo_all_blocks=1 00:12:05.235 --rc geninfo_unexecuted_blocks=1 00:12:05.235 00:12:05.235 ' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.235 --rc genhtml_branch_coverage=1 00:12:05.235 --rc genhtml_function_coverage=1 00:12:05.235 --rc genhtml_legend=1 00:12:05.235 --rc geninfo_all_blocks=1 00:12:05.235 --rc geninfo_unexecuted_blocks=1 00:12:05.235 00:12:05.235 ' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.235 --rc genhtml_branch_coverage=1 00:12:05.235 --rc genhtml_function_coverage=1 00:12:05.235 --rc genhtml_legend=1 00:12:05.235 --rc geninfo_all_blocks=1 00:12:05.235 --rc geninfo_unexecuted_blocks=1 00:12:05.235 00:12:05.235 ' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.235 --rc genhtml_branch_coverage=1 00:12:05.235 --rc genhtml_function_coverage=1 00:12:05.235 --rc genhtml_legend=1 00:12:05.235 --rc geninfo_all_blocks=1 00:12:05.235 --rc geninfo_unexecuted_blocks=1 00:12:05.235 00:12:05.235 ' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.235 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
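The `[: : integer expression expected` complaint above is bash, not SPDK: line 33 of nvmf/common.sh evaluates `'[' '' -eq 1 ']'` because the environment toggle it tests is unset in this job, and test's -eq operator requires integers on both sides. Execution continues normally (the following `'[' -n '' ']'` and `'[' 0 -eq 1 ']'` guards fail cleanly), so the message is cosmetic, but the usual defensive spelling avoids it by giving the variable a numeric default. A hypothetical guard illustrating the pattern (the variable name is made up):

  # "[ '' -eq 1 ]" aborts the test expression; a ":-0" default ensures -eq always sees an integer
  if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi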
00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.236 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:11.807 12:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:11.807 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:11.807 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:11.807 
12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:11.807 Found net devices under 0000:af:00.0: cvl_0_0 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.807 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:11.808 Found net devices under 0000:af:00.1: cvl_0_1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.808 12:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:12:11.808 00:12:11.808 --- 10.0.0.2 ping statistics --- 00:12:11.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.808 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:11.808 00:12:11.808 --- 10.0.0.1 ping statistics --- 00:12:11.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.808 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=216144 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 216144 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 216144 ']' 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
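Before the referral test proper, the harness wires up the physical topology whose traces appear above: both Intel E810 ports are matched to net devices (cvl_0_0, cvl_0_1), the target-side port is moved into a private network namespace with 10.0.0.2/24 while the initiator port keeps 10.0.0.1/24 in the root namespace, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves the path. Condensed from the commands in the log (run as root; interface and namespace names as above):

  ip netns add cvl_0_0_ns_spdk                         # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target (0.327 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.189 ms above)

This is also why nvmf_tgt (nvmfpid 216144) is launched under `ip netns exec cvl_0_0_ns_spdk`, as seen in the startup line just above.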
00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 [2024-12-13 12:17:38.743478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:11.808 [2024-12-13 12:17:38.743527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.808 [2024-12-13 12:17:38.823440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.808 [2024-12-13 12:17:38.847610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.808 [2024-12-13 12:17:38.847649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.808 [2024-12-13 12:17:38.847657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.808 [2024-12-13 12:17:38.847663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.808 [2024-12-13 12:17:38.847667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.808 [2024-12-13 12:17:38.849032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.808 [2024-12-13 12:17:38.849060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.808 [2024-12-13 12:17:38.849168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.808 [2024-12-13 12:17:38.849169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 [2024-12-13 12:17:38.985107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
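With the target up, referrals.sh provisions exactly what it is about to test: a TCP transport (the `-t tcp -o -u 8192` options come straight from NVMF_TRANSPORT_OPTS in the log), a discovery listener on 8009 (the NVMe-oF well-known discovery port), and three referral records at 127.0.0.2 through 127.0.0.4 on port 4430, whose count `nvmf_discovery_get_referrals | jq length` then verifies. The equivalent sequence outside the harness, again assuming scripts/rpc.py in place of rpc_cmd:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # options copied verbatim from the log
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length                 # the test expects 3

The get_referral_ips helper traced below does the same query, extracting and sorting .[].address.traddr to compare against the expected 127.0.0.2 127.0.0.3 127.0.0.4.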
00:12:11.808 [2024-12-13 12:17:39.014936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.808 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:11.809 12:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.809 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.068 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:12.068 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:12.068 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:12.068 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.069 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.328 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.587 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.588 12:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.588 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.846 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.105 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.364 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
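The pattern the referral test repeated after every add and remove, before the teardown above: the referral list reported over RPC must match what a host sees in the discovery log, excluding the entry for the discovery subsystem itself. Distilled into one comparison (a sketch; HOSTNQN and HOSTID stand for the values nvme gen-hostnqn produced for this run):

    rpc_ips=$(scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    log_ips=$(nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
                  -t tcp -a 10.0.0.2 -s 8009 -o json |
              jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [ "$rpc_ips" = "$log_ips" ] || echo "referral lists diverge" >&2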
00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.623 rmmod nvme_tcp 00:12:13.623 rmmod nvme_fabrics 00:12:13.623 rmmod nvme_keyring 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 216144 ']' 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 216144 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 216144 ']' 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 216144 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216144 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216144' 00:12:13.623 killing process with pid 216144 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 216144 00:12:13.623 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 216144 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.883 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.789 00:12:15.789 real 0m10.952s 00:12:15.789 user 0m12.494s 00:12:15.789 sys 0m5.198s 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.789 ************************************ 00:12:15.789 END TEST nvmf_referrals 00:12:15.789 ************************************ 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.789 12:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.050 ************************************ 00:12:16.050 START TEST nvmf_connect_disconnect 00:12:16.050 ************************************ 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:16.050 * Looking for test storage... 00:12:16.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.050 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.050 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.051 12:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.051 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.625 
12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:22.625 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:22.625 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.625 
12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:22.626 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:22.626 Found net devices under 0000:af:00.0: cvl_0_0 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:22.626 Found net devices under 0000:af:00.1: cvl_0_1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:22.626 00:12:22.626 --- 10.0.0.2 ping statistics --- 00:12:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.626 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:22.626 00:12:22.626 --- 10.0.0.1 ping statistics --- 00:12:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.626 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=220123 00:12:22.626 12:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 220123 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 220123 ']' 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.626 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.626 [2024-12-13 12:17:49.831229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:22.626 [2024-12-13 12:17:49.831274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.626 [2024-12-13 12:17:49.910690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.626 [2024-12-13 12:17:49.933327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.626 [2024-12-13 12:17:49.933365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.626 [2024-12-13 12:17:49.933374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.626 [2024-12-13 12:17:49.933381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.626 [2024-12-13 12:17:49.933386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
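The nvmf_tcp_init sequence traced at 12:17:49 above moves one port of the e810 pair into a fresh network namespace and opens the NVMe/TCP port in the firewall; done by hand it amounts to the following (a sketch using the interface names from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host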
00:12:22.626 [2024-12-13 12:17:49.934824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.626 [2024-12-13 12:17:49.934856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.627 [2024-12-13 12:17:49.934967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.627 [2024-12-13 12:17:49.934968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 [2024-12-13 12:17:50.075066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 12:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.627 [2024-12-13 12:17:50.133810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:22.627 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:12:25.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [this notice repeats once per connect/disconnect iteration, 100 times in total, with timestamps running from 00:12:25.163 through 00:16:14.869; the remaining repetitions are elided]
00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.869 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:14.869 rmmod nvme_tcp 00:16:14.869 rmmod nvme_fabrics 00:16:14.869 rmmod nvme_keyring 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 220123 ']' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 220123 ']' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:14.869
12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220123' 00:16:14.869 killing process with pid 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 220123 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.869 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.776 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:16.776 00:16:16.776 real 4m0.855s 00:16:16.776 user 15m19.518s 00:16:16.776 sys 0m24.941s 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:16.777 ************************************ 00:16:16.777 END TEST nvmf_connect_disconnect 00:16:16.777 ************************************ 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.777 ************************************ 00:16:16.777 START TEST nvmf_multitarget 00:16:16.777 ************************************ 00:16:16.777 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:17.037 * Looking for test storage... 00:16:17.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:17.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.037 --rc genhtml_branch_coverage=1 00:16:17.037 --rc genhtml_function_coverage=1 00:16:17.037 --rc genhtml_legend=1 00:16:17.037 --rc geninfo_all_blocks=1 00:16:17.037 --rc geninfo_unexecuted_blocks=1 00:16:17.037 00:16:17.037 ' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:17.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.037 --rc genhtml_branch_coverage=1 00:16:17.037 --rc genhtml_function_coverage=1 00:16:17.037 --rc genhtml_legend=1 00:16:17.037 --rc geninfo_all_blocks=1 00:16:17.037 --rc geninfo_unexecuted_blocks=1 00:16:17.037 00:16:17.037 ' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:17.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.037 --rc genhtml_branch_coverage=1 00:16:17.037 --rc genhtml_function_coverage=1 00:16:17.037 --rc genhtml_legend=1 00:16:17.037 --rc geninfo_all_blocks=1 00:16:17.037 --rc geninfo_unexecuted_blocks=1 00:16:17.037 00:16:17.037 ' 00:16:17.037 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:17.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.037 --rc genhtml_branch_coverage=1 00:16:17.037 --rc genhtml_function_coverage=1 00:16:17.037 --rc genhtml_legend=1 00:16:17.037 --rc geninfo_all_blocks=1 00:16:17.037 --rc geninfo_unexecuted_blocks=1 00:16:17.038 00:16:17.038 ' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.038 12:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:17.038 12:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:17.038 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:23.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:23.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:23.612 Found net devices under 0000:af:00.0: cvl_0_0 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:23.612 Found net devices under 0000:af:00.1: cvl_0_1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.612 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:23.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:16:23.613 00:16:23.613 --- 10.0.0.2 ping statistics --- 00:16:23.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.613 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:16:23.613 00:16:23.613 --- 10.0.0.1 ping statistics --- 00:16:23.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.613 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=263635 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 263635 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 263635 ']' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.613 [2024-12-13 12:21:50.617583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
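The target now starting was given a private network namespace by the bring-up traced above: the first port (cvl_0_0) is moved into cvl_0_0_ns_spdk as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings prove each direction works before any NVMe/TCP traffic is attempted. Reduced to the bare commands from this run (interface names are specific to this machine's NICs; the run tags the iptables rule with a longer SPDK_NVMF comment embedding the full rule text):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP on the initiator port; the SPDK_NVMF comment lets teardown
# strip the rule later via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF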
00:16:23.613 [2024-12-13 12:21:50.617625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.613 [2024-12-13 12:21:50.697301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.613 [2024-12-13 12:21:50.720036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.613 [2024-12-13 12:21:50.720074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.613 [2024-12-13 12:21:50.720080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.613 [2024-12-13 12:21:50.720086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.613 [2024-12-13 12:21:50.720091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.613 [2024-12-13 12:21:50.721506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.613 [2024-12-13 12:21:50.721614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.613 [2024-12-13 12:21:50.721723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.613 [2024-12-13 12:21:50.721724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:23.613 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:23.613 "nvmf_tgt_1" 00:16:23.613 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:23.613 "nvmf_tgt_2" 00:16:23.613 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
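Here the test has just created two extra targets on top of the default one and is about to re-count them; every assertion in multitarget.sh has this shape, a multitarget_rpc.py call piped through jq length and compared with the expected count. The whole create/verify/delete cycle condenses to the following (default RPC socket assumed, as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32          # prints the new target's name
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]     # default target + the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1                # prints "true" on success
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default target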
00:16:23.613 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:23.613 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:23.613 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:23.872 true 00:16:23.872 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:23.872 true 00:16:23.872 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:23.872 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.132 rmmod nvme_tcp 00:16:24.132 rmmod nvme_fabrics 00:16:24.132 rmmod nvme_keyring 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 263635 ']' 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 263635 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 263635 ']' 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 263635 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263635 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.132 12:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263635' 00:16:24.132 killing process with pid 263635 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 263635 00:16:24.132 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 263635 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.392 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.299 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:26.299 00:16:26.299 real 0m9.535s 00:16:26.299 user 0m7.171s 00:16:26.299 sys 0m4.864s 00:16:26.299 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.299 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.299 ************************************ 00:16:26.299 END TEST nvmf_multitarget 00:16:26.299 ************************************ 00:16:26.560 12:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:26.560 12:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.560 ************************************ 00:16:26.560 START TEST nvmf_rpc 00:16:26.560 ************************************ 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:26.560 * Looking for test storage... 
00:16:26.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.560 --rc genhtml_branch_coverage=1 00:16:26.560 --rc genhtml_function_coverage=1 00:16:26.560 --rc genhtml_legend=1 00:16:26.560 --rc geninfo_all_blocks=1 00:16:26.560 --rc geninfo_unexecuted_blocks=1 00:16:26.560 00:16:26.560 ' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.560 --rc genhtml_branch_coverage=1 00:16:26.560 --rc genhtml_function_coverage=1 00:16:26.560 --rc genhtml_legend=1 00:16:26.560 --rc geninfo_all_blocks=1 00:16:26.560 --rc geninfo_unexecuted_blocks=1 00:16:26.560 00:16:26.560 ' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.560 --rc genhtml_branch_coverage=1 00:16:26.560 --rc genhtml_function_coverage=1 00:16:26.560 --rc genhtml_legend=1 00:16:26.560 --rc geninfo_all_blocks=1 00:16:26.560 --rc geninfo_unexecuted_blocks=1 00:16:26.560 00:16:26.560 ' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.560 --rc genhtml_branch_coverage=1 00:16:26.560 --rc genhtml_function_coverage=1 00:16:26.560 --rc genhtml_legend=1 00:16:26.560 --rc geninfo_all_blocks=1 00:16:26.560 --rc geninfo_unexecuted_blocks=1 00:16:26.560 00:16:26.560 ' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
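The lt 1.15 2 call traced above is scripts/common.sh deciding whether the installed lcov predates version 2; the xtrace shows it splitting each version string on ".-:" into arrays and comparing the parts numerically. A minimal standalone sketch of the same idea (not the exact SPDK helper; assumes purely numeric version components):

    # usage: version_lt 1.15 2   -> returns 0 (true) when $1 < $2
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing components compare as 0, so 1.15 vs 2 works
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1  # equal is not less-than
    }
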
00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.560 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.561 12:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.561 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.137 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:33.138 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:33.138 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:33.138 Found net devices under 0000:af:00.0: cvl_0_0 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:33.138 Found net devices under 0000:af:00.1: cvl_0_1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:33.138 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:33.138 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:33.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:16:33.138 00:16:33.138 --- 10.0.0.2 ping statistics --- 00:16:33.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.138 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:16:33.138 00:16:33.138 --- 10.0.0.1 ping statistics --- 00:16:33.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.138 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=267363 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 267363 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 267363 ']' 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.138 [2024-12-13 12:22:00.163527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:16:33.138 [2024-12-13 12:22:00.163572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.138 [2024-12-13 12:22:00.242024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.138 [2024-12-13 12:22:00.265005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.138 [2024-12-13 12:22:00.265045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.138 [2024-12-13 12:22:00.265052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.138 [2024-12-13 12:22:00.265058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.138 [2024-12-13 12:22:00.265063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.138 [2024-12-13 12:22:00.266521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.138 [2024-12-13 12:22:00.266630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.138 [2024-12-13 12:22:00.266741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.138 [2024-12-13 12:22:00.266740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.138 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:33.139 "tick_rate": 2100000000, 00:16:33.139 "poll_groups": [ 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_000", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_001", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_002", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 
"current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_003", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [] 00:16:33.139 } 00:16:33.139 ] 00:16:33.139 }' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 [2024-12-13 12:22:00.503226] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:33.139 "tick_rate": 2100000000, 00:16:33.139 "poll_groups": [ 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_000", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [ 00:16:33.139 { 00:16:33.139 "trtype": "TCP" 00:16:33.139 } 00:16:33.139 ] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_001", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [ 00:16:33.139 { 00:16:33.139 "trtype": "TCP" 00:16:33.139 } 00:16:33.139 ] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_002", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [ 00:16:33.139 { 00:16:33.139 "trtype": "TCP" 
00:16:33.139 } 00:16:33.139 ] 00:16:33.139 }, 00:16:33.139 { 00:16:33.139 "name": "nvmf_tgt_poll_group_003", 00:16:33.139 "admin_qpairs": 0, 00:16:33.139 "io_qpairs": 0, 00:16:33.139 "current_admin_qpairs": 0, 00:16:33.139 "current_io_qpairs": 0, 00:16:33.139 "pending_bdev_io": 0, 00:16:33.139 "completed_nvme_io": 0, 00:16:33.139 "transports": [ 00:16:33.139 { 00:16:33.139 "trtype": "TCP" 00:16:33.139 } 00:16:33.139 ] 00:16:33.139 } 00:16:33.139 ] 00:16:33.139 }' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 Malloc1 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 [2024-12-13 12:22:00.684961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:33.139 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:33.140 [2024-12-13 12:22:00.713496] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:33.140 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:33.140 could not add new controller: failed to write to nvme-fabrics device 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:33.140 12:22:00 
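The "could not add new controller" failure above is the expected outcome, not a bug: rpc.sh first disables allow_any_host on the subsystem, so a connect from a host NQN that is not on the subsystem's allow list is rejected with "does not allow host", and the NOT wrapper asserts the non-zero exit. The sequence being exercised, condensed from the xtrace here and immediately below (rpc_cmd is the suite's JSON-RPC wrapper; NVME_HOST is the --hostnqn/--hostid pair set in common.sh):

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # reject unknown hosts
    NOT nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420                                               # must fail
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420                                               # now allowed
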
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.140 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.518 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.518 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.518 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.518 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.518 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.424 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.424 [2024-12-13 12:22:04.027412] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:36.424 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:36.424 could not add new controller: failed to write to nvme-fabrics device 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.424 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.424 
12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.425 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.803 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.803 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.803 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.803 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:37.803 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:39.752 
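The for i in $(seq 1 $loops) entered above drives the rest of this test: the same create/attach/connect/verify/teardown cycle repeated $loops (5, per rpc.sh) times. A condensed reading of one iteration, with the commands as they appear in this log (ordering reconstructed from the xtrace, not from the script source):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach Malloc1 as nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # poll until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
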
12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.752 [2024-12-13 12:22:07.381981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.752 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.131 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.131 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.131 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.131 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:41.131 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.036 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.036 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.036 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 [2024-12-13 12:22:10.644422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.037 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.416 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.416 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:44.416 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.416 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:44.416 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 [2024-12-13 12:22:13.949491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.322 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.701 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.701 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.701 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.701 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.702 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.607 
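[Editorial note] The iterations traced above all drive the same subsystem lifecycle through rpc_cmd. Reconstructed from the xtrace markers (target/rpc.sh@81-94), one pass looks roughly like the sketch below; this is an approximation assembled from the trace, not the verbatim script, and $loops, rpc_cmd, and NVME_HOST come from the surrounding test harness:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # connect from the kernel initiator, wait for the namespace, then tear down
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
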
12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
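[Editorial note] The waitforserial and waitforserial_disconnect calls traced here poll lsblk until a block device with the expected serial appears on (or disappears from) the host. A minimal sketch assembled from the commands visible in this xtrace (common/autotest_common.sh@1202-1235); the shipped helpers are more defensive than this:

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # succeed once no device lists the serial any more
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }
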
00:16:49.607 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.867 [2024-12-13 12:22:17.307385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.867 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.805 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.805 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.805 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.805 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.805 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 [2024-12-13 12:22:20.608606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.343 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.281 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.281 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.281 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.281 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:54.281 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:56.187 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.188 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:56.448 
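[Editorial note] From this point the test switches to a second loop (the target/rpc.sh@99-107 markers) that churns the subsystem and namespace without ever connecting a host, isolating the namespace attach/detach RPC path. Its reconstructed shape, again an approximation read off the trace rather than the script itself:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # detach nsid 1 and delete the subsystem with no host attached
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
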
12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 [2024-12-13 12:22:23.919816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 [2024-12-13 12:22:23.971941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 
12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 [2024-12-13 12:22:24.020077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.448 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 [2024-12-13 12:22:24.068241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 [2024-12-13 12:22:24.120434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.449 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.708 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:56.708 "tick_rate": 2100000000, 00:16:56.708 "poll_groups": [ 00:16:56.708 { 00:16:56.708 "name": "nvmf_tgt_poll_group_000", 00:16:56.708 "admin_qpairs": 2, 00:16:56.708 "io_qpairs": 168, 00:16:56.709 "current_admin_qpairs": 0, 00:16:56.709 "current_io_qpairs": 0, 00:16:56.709 "pending_bdev_io": 0, 00:16:56.709 "completed_nvme_io": 220, 00:16:56.709 "transports": [ 00:16:56.709 { 00:16:56.709 "trtype": "TCP" 00:16:56.709 } 00:16:56.709 ] 00:16:56.709 }, 00:16:56.709 { 00:16:56.709 "name": "nvmf_tgt_poll_group_001", 00:16:56.709 "admin_qpairs": 2, 00:16:56.709 "io_qpairs": 168, 00:16:56.709 "current_admin_qpairs": 0, 00:16:56.709 "current_io_qpairs": 0, 00:16:56.709 "pending_bdev_io": 0, 00:16:56.709 "completed_nvme_io": 266, 00:16:56.709 "transports": [ 00:16:56.709 { 00:16:56.709 "trtype": "TCP" 00:16:56.709 } 00:16:56.709 ] 00:16:56.709 }, 00:16:56.709 { 00:16:56.709 "name": "nvmf_tgt_poll_group_002", 00:16:56.709 "admin_qpairs": 1, 00:16:56.709 "io_qpairs": 168, 00:16:56.709 "current_admin_qpairs": 0, 00:16:56.709 "current_io_qpairs": 0, 00:16:56.709 "pending_bdev_io": 0, 00:16:56.709 "completed_nvme_io": 185, 00:16:56.709 "transports": [ 00:16:56.709 { 00:16:56.709 "trtype": "TCP" 00:16:56.709 } 00:16:56.709 ] 00:16:56.709 }, 00:16:56.709 { 00:16:56.709 "name": "nvmf_tgt_poll_group_003", 00:16:56.709 "admin_qpairs": 2, 00:16:56.709 "io_qpairs": 168, 00:16:56.709 "current_admin_qpairs": 0, 00:16:56.709 "current_io_qpairs": 0, 00:16:56.709 "pending_bdev_io": 0, 00:16:56.709 "completed_nvme_io": 351, 00:16:56.709 "transports": [ 00:16:56.709 { 00:16:56.709 "trtype": "TCP" 00:16:56.709 } 00:16:56.709 ] 00:16:56.709 } 00:16:56.709 ] 00:16:56.709 }' 00:16:56.709 12:22:24 
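[Editorial note] The nvmf_get_stats dump captured above is reduced by the jsum helper traced next: jq extracts one number per poll group and awk sums the column. From the target/rpc.sh@19-20 markers the helper is essentially the following (how $stats is fed to jq is not visible in the trace; the here-string is an assumption):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Applied to the stats above:
    #   jsum '.poll_groups[].admin_qpairs'  -> 7    (2+2+1+2)
    #   jsum '.poll_groups[].io_qpairs'     -> 672  (168*4)

The test only asserts that both sums are positive, as the (( 7 > 0 )) and (( 672 > 0 )) checks below show.
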
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.709 rmmod nvme_tcp 00:16:56.709 rmmod nvme_fabrics 00:16:56.709 rmmod nvme_keyring 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 267363 ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 267363 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 267363 ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 267363 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267363 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267363' 
00:16:56.709 killing process with pid 267363 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 267363 00:16:56.709 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 267363 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.969 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.518 00:16:59.518 real 0m32.601s 00:16:59.518 user 1m38.587s 00:16:59.518 sys 0m6.372s 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.518 ************************************ 00:16:59.518 END TEST nvmf_rpc 00:16:59.518 ************************************ 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.518 ************************************ 00:16:59.518 START TEST nvmf_invalid 00:16:59.518 ************************************ 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.518 * Looking for test storage... 
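[Editorial note] Before nvmf_invalid starts, the teardown just traced is worth summarizing: nvmftestfini unloads the kernel initiator modules, kills the target application (pid 267363 in this run), strips the SPDK iptables rules, and flushes the test interface. Condensed from the commands visible above; the real nvmf/common.sh wraps each step in retries and error handling:

    sync
    modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 267363 && wait 267363     # the nvmf target app's pid in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
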
00:16:59.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.518 --rc genhtml_branch_coverage=1 00:16:59.518 --rc genhtml_function_coverage=1 00:16:59.518 --rc genhtml_legend=1 00:16:59.518 --rc geninfo_all_blocks=1 00:16:59.518 --rc geninfo_unexecuted_blocks=1 00:16:59.518 00:16:59.518 ' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.518 --rc genhtml_branch_coverage=1 00:16:59.518 --rc genhtml_function_coverage=1 00:16:59.518 --rc genhtml_legend=1 00:16:59.518 --rc geninfo_all_blocks=1 00:16:59.518 --rc geninfo_unexecuted_blocks=1 00:16:59.518 00:16:59.518 ' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.518 --rc genhtml_branch_coverage=1 00:16:59.518 --rc genhtml_function_coverage=1 00:16:59.518 --rc genhtml_legend=1 00:16:59.518 --rc geninfo_all_blocks=1 00:16:59.518 --rc geninfo_unexecuted_blocks=1 00:16:59.518 00:16:59.518 ' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.518 --rc genhtml_branch_coverage=1 00:16:59.518 --rc genhtml_function_coverage=1 00:16:59.518 --rc genhtml_legend=1 00:16:59.518 --rc geninfo_all_blocks=1 00:16:59.518 --rc geninfo_unexecuted_blocks=1 00:16:59.518 00:16:59.518 ' 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:59.518 12:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.518 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.519 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:06.097 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:06.097 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:06.097 Found net devices under 0000:af:00.0: cvl_0_0 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:06.097 Found net devices under 0000:af:00.1: cvl_0_1 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.097 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:17:06.098 00:17:06.098 --- 10.0.0.2 ping statistics --- 00:17:06.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.098 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:17:06.098 00:17:06.098 --- 10.0.0.1 ping statistics --- 00:17:06.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.098 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=274897 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 274897 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 274897 ']' 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.098 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 [2024-12-13 12:22:32.898878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
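The setup traced above turns the two E810 ports into a point-to-point TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, both directions are ping-verified (0.320 ms and 0.166 ms round trips), and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same plumbing, using only the interface names, addresses, and flags shown in the trace (absolute paths shortened):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF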
00:17:06.098 [2024-12-13 12:22:32.898922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.098 [2024-12-13 12:22:32.973186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.098 [2024-12-13 12:22:32.995277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.098 [2024-12-13 12:22:32.995318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.098 [2024-12-13 12:22:32.995325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.098 [2024-12-13 12:22:32.995330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.098 [2024-12-13 12:22:32.995335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.098 [2024-12-13 12:22:32.996614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.098 [2024-12-13 12:22:32.996727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.098 [2024-12-13 12:22:32.996823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.098 [2024-12-13 12:22:32.996824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22651 00:17:06.098 [2024-12-13 12:22:33.297582] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode22651", 00:17:06.098 "tgt_name": "foobar", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 response: 00:17:06.098 { 00:17:06.098 "code": -32603, 00:17:06.098 "message": "Unable to find target foobar" 00:17:06.098 }' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode22651", 00:17:06.098 "tgt_name": "foobar", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 
response: 00:17:06.098 { 00:17:06.098 "code": -32603, 00:17:06.098 "message": "Unable to find target foobar" 00:17:06.098 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5143 00:17:06.098 [2024-12-13 12:22:33.498265] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5143: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode5143", 00:17:06.098 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 response: 00:17:06.098 { 00:17:06.098 "code": -32602, 00:17:06.098 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.098 }' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode5143", 00:17:06.098 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 response: 00:17:06.098 { 00:17:06.098 "code": -32602, 00:17:06.098 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.098 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24487 00:17:06.098 [2024-12-13 12:22:33.698930] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24487: invalid model number 'SPDK_Controller' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode24487", 00:17:06.098 "model_number": "SPDK_Controller\u001f", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 response: 00:17:06.098 { 00:17:06.098 "code": -32602, 00:17:06.098 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.098 }' 00:17:06.098 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:06.098 { 00:17:06.098 "nqn": "nqn.2016-06.io.spdk:cnode24487", 00:17:06.098 "model_number": "SPDK_Controller\u001f", 00:17:06.098 "method": "nvmf_create_subsystem", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.099 Got JSON-RPC error response 00:17:06.099 response: 00:17:06.099 { 00:17:06.099 "code": -32602, 00:17:06.099 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.099 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:06.099 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:06.099 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:06.359 
12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 
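The three nvmf_create_subsystem rejections traced before this character loop are the core of the test: each call passes exactly one invalid field (an unknown tgt_name, a serial number containing the non-printable byte 0x1f, a model number with the same byte) and the script asserts on the JSON-RPC error text it gets back. A condensed sketch of that pattern, with the rpc.py path shortened for readability:

  out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22651 2>&1) || true
  [[ $out == *'Unable to find target'* ]]             # -32603: unknown tgt_name
  out=$(scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5143 2>&1) || true
  [[ $out == *'Invalid SN'* ]]                        # -32602: 0x1f is not printable ASCII
  out=$(scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24487 2>&1) || true
  [[ $out == *'Invalid MN'* ]]                        # -32602: same byte in the model number

The character loop around this point (gen_random_s) is building a longer adversarial serial number for the same kind of check.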
00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.:1Ab(_q7wGa=|Ksd-=(\' 00:17:06.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.:1Ab(_q7wGa=|Ksd-=(\' nqn.2016-06.io.spdk:cnode17148 00:17:06.360 [2024-12-13 12:22:34.040063] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17148: invalid serial number '.:1Ab(_q7wGa=|Ksd-=(\' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:06.620 { 00:17:06.620 "nqn": "nqn.2016-06.io.spdk:cnode17148", 00:17:06.620 "serial_number": ".:1Ab(_q7wGa=|Ksd-=(\\", 00:17:06.620 "method": "nvmf_create_subsystem", 00:17:06.620 "req_id": 1 00:17:06.620 } 00:17:06.620 Got JSON-RPC error response 00:17:06.620 response: 00:17:06.620 { 00:17:06.620 "code": -32602, 00:17:06.620 "message": "Invalid SN .:1Ab(_q7wGa=|Ksd-=(\\" 00:17:06.620 }' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:06.620 { 00:17:06.620 "nqn": "nqn.2016-06.io.spdk:cnode17148", 00:17:06.620 "serial_number": ".:1Ab(_q7wGa=|Ksd-=(\\", 00:17:06.620 "method": "nvmf_create_subsystem", 00:17:06.620 "req_id": 1 00:17:06.620 } 00:17:06.620 Got JSON-RPC error response 00:17:06.620 response: 00:17:06.620 { 00:17:06.620 "code": -32602, 00:17:06.620 "message": "Invalid SN .:1Ab(_q7wGa=|Ksd-=(\\" 00:17:06.620 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 
00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.620 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 
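The loop traced here is gen_random_s: it draws codes from a chars array covering decimal 32-127 (space through DEL), converts each to hex with printf %x, materializes the byte with echo -e '\xNN', and appends it to string; RANDOM=0 near the top of the script makes the sequence reproducible across runs, and the final [[ ... == \- ]] test guards against a leading '-' that rpc.py would otherwise parse as an option. A minimal condensation of the same helper:

  gen_random_s() {
      local length=$1 ll c string=
      local chars=($(seq 32 127))                     # ' ' .. DEL, as in the trace
      for ((ll = 0; ll < length; ll++)); do
          printf -v c '\\x%x' "${chars[RANDOM % ${#chars[@]}]}"
          string+=$(echo -e "$c")                     # append the decoded byte
      done
      echo "$string"
  }

The 21- and 41-byte strings it produces here (.:1Ab(_q7wGa=|Ksd-=(\ and the longer model number) are then fed back into nvmf_create_subsystem to confirm the target rejects them.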
00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.621 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='"' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.622 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.881 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x61' 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2.By,fK[=E&J@Skr)~JI~}f8'\''v]p5*(q"Pd.O2%az' 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2.By,fK[=E&J@Skr)~JI~}f8'\''v]p5*(q"Pd.O2%az' nqn.2016-06.io.spdk:cnode10659 00:17:06.882 [2024-12-13 12:22:34.513641] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10659: invalid model number '2.By,fK[=E&J@Skr)~JI~}f8'v]p5*(q"Pd.O2%az' 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:06.882 { 00:17:06.882 "nqn": "nqn.2016-06.io.spdk:cnode10659", 00:17:06.882 "model_number": "2.By,fK[=E&J@Skr)~JI~}f8'\''v]p5*(q\"Pd.O2%az", 00:17:06.882 "method": "nvmf_create_subsystem", 00:17:06.882 "req_id": 1 00:17:06.882 } 00:17:06.882 Got JSON-RPC error response 00:17:06.882 response: 00:17:06.882 { 00:17:06.882 "code": -32602, 00:17:06.882 "message": "Invalid MN 2.By,fK[=E&J@Skr)~JI~}f8'\''v]p5*(q\"Pd.O2%az" 00:17:06.882 }' 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:06.882 { 00:17:06.882 "nqn": "nqn.2016-06.io.spdk:cnode10659", 00:17:06.882 "model_number": "2.By,fK[=E&J@Skr)~JI~}f8'v]p5*(q\"Pd.O2%az", 00:17:06.882 "method": "nvmf_create_subsystem", 00:17:06.882 "req_id": 1 00:17:06.882 } 00:17:06.882 Got JSON-RPC error response 00:17:06.882 response: 00:17:06.882 { 00:17:06.882 "code": -32602, 00:17:06.882 "message": "Invalid MN 2.By,fK[=E&J@Skr)~JI~}f8'v]p5*(q\"Pd.O2%az" 00:17:06.882 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:06.882 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:07.141 [2024-12-13 12:22:34.714441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.141 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:07.400 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:07.400 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:07.400 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:07.400 12:22:34 
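The character-assembly loop traced above is worth unpacking: invalid.sh picks a decimal character code, renders it as hex with printf %x, converts the escape back to a character with echo -e, and appends it to the candidate model number. A minimal standalone sketch of that idiom (the helper name gen_random_string is hypothetical):

    # Build an N-character string from random printable ASCII codes, using
    # the same printf %x / echo -e round trip the trace above steps through.
    gen_random_string() {
        local length=$1 string='' ll code hex
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( (RANDOM % 95) + 32 ))   # printable ASCII, 0x20-0x7e
            hex=$(printf %x "$code")         # e.g. 93 -> 5d
            string+=$(echo -e "\x$hex")      # e.g. \x5d -> ']'
        done
        echo "$string"
    }

    gen_random_string 41   # yields something like 2.By,fK[=E&J@Skr)~JI...

The RPC rejection that follows confirms the point of the exercise: the generated string is not a valid model number (presumably overlong for the 40-byte NVMe model-number field), and nvmf_create_subsystem reports "Invalid MN".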
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:07.400 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:07.659 [2024-12-13 12:22:35.117020] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:07.659 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:07.659 { 00:17:07.659 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:07.659 "listen_address": { 00:17:07.659 "trtype": "tcp", 00:17:07.659 "traddr": "", 00:17:07.659 "trsvcid": "4421" 00:17:07.659 }, 00:17:07.659 "method": "nvmf_subsystem_remove_listener", 00:17:07.659 "req_id": 1 00:17:07.659 } 00:17:07.659 Got JSON-RPC error response 00:17:07.659 response: 00:17:07.659 { 00:17:07.659 "code": -32602, 00:17:07.659 "message": "Invalid parameters" 00:17:07.659 }' 00:17:07.659 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:07.659 { 00:17:07.659 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:07.659 "listen_address": { 00:17:07.659 "trtype": "tcp", 00:17:07.659 "traddr": "", 00:17:07.659 "trsvcid": "4421" 00:17:07.659 }, 00:17:07.659 "method": "nvmf_subsystem_remove_listener", 00:17:07.659 "req_id": 1 00:17:07.659 } 00:17:07.659 Got JSON-RPC error response 00:17:07.659 response: 00:17:07.659 { 00:17:07.659 "code": -32602, 00:17:07.659 "message": "Invalid parameters" 00:17:07.659 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:07.659 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1296 -i 0 00:17:07.659 [2024-12-13 12:22:35.313684] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1296: invalid cntlid range [0-65519] 00:17:07.659 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:07.659 { 00:17:07.659 "nqn": "nqn.2016-06.io.spdk:cnode1296", 00:17:07.659 "min_cntlid": 0, 00:17:07.659 "method": "nvmf_create_subsystem", 00:17:07.659 "req_id": 1 00:17:07.660 } 00:17:07.660 Got JSON-RPC error response 00:17:07.660 response: 00:17:07.660 { 00:17:07.660 "code": -32602, 00:17:07.660 "message": "Invalid cntlid range [0-65519]" 00:17:07.660 }' 00:17:07.660 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:07.660 { 00:17:07.660 "nqn": "nqn.2016-06.io.spdk:cnode1296", 00:17:07.660 "min_cntlid": 0, 00:17:07.660 "method": "nvmf_create_subsystem", 00:17:07.660 "req_id": 1 00:17:07.660 } 00:17:07.660 Got JSON-RPC error response 00:17:07.660 response: 00:17:07.660 { 00:17:07.660 "code": -32602, 00:17:07.660 "message": "Invalid cntlid range [0-65519]" 00:17:07.660 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:07.660 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30584 -i 65520 00:17:07.918 [2024-12-13 12:22:35.522359] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30584: invalid cntlid range [65520-65519] 00:17:07.918 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:07.918 { 00:17:07.918 "nqn": 
"nqn.2016-06.io.spdk:cnode30584", 00:17:07.918 "min_cntlid": 65520, 00:17:07.918 "method": "nvmf_create_subsystem", 00:17:07.918 "req_id": 1 00:17:07.918 } 00:17:07.918 Got JSON-RPC error response 00:17:07.918 response: 00:17:07.918 { 00:17:07.918 "code": -32602, 00:17:07.918 "message": "Invalid cntlid range [65520-65519]" 00:17:07.918 }' 00:17:07.918 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:07.918 { 00:17:07.918 "nqn": "nqn.2016-06.io.spdk:cnode30584", 00:17:07.918 "min_cntlid": 65520, 00:17:07.918 "method": "nvmf_create_subsystem", 00:17:07.918 "req_id": 1 00:17:07.918 } 00:17:07.918 Got JSON-RPC error response 00:17:07.918 response: 00:17:07.918 { 00:17:07.918 "code": -32602, 00:17:07.918 "message": "Invalid cntlid range [65520-65519]" 00:17:07.918 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:07.918 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17709 -I 0 00:17:08.178 [2024-12-13 12:22:35.739083] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17709: invalid cntlid range [1-0] 00:17:08.178 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:08.178 { 00:17:08.178 "nqn": "nqn.2016-06.io.spdk:cnode17709", 00:17:08.178 "max_cntlid": 0, 00:17:08.178 "method": "nvmf_create_subsystem", 00:17:08.178 "req_id": 1 00:17:08.178 } 00:17:08.178 Got JSON-RPC error response 00:17:08.178 response: 00:17:08.178 { 00:17:08.178 "code": -32602, 00:17:08.178 "message": "Invalid cntlid range [1-0]" 00:17:08.178 }' 00:17:08.178 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:08.178 { 00:17:08.178 "nqn": "nqn.2016-06.io.spdk:cnode17709", 00:17:08.178 "max_cntlid": 0, 00:17:08.178 "method": "nvmf_create_subsystem", 00:17:08.178 "req_id": 1 00:17:08.178 } 00:17:08.178 Got JSON-RPC error response 00:17:08.178 response: 00:17:08.178 { 00:17:08.178 "code": -32602, 00:17:08.178 "message": "Invalid cntlid range [1-0]" 00:17:08.178 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:08.178 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4647 -I 65520 00:17:08.437 [2024-12-13 12:22:35.943798] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4647: invalid cntlid range [1-65520] 00:17:08.437 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:08.437 { 00:17:08.437 "nqn": "nqn.2016-06.io.spdk:cnode4647", 00:17:08.437 "max_cntlid": 65520, 00:17:08.437 "method": "nvmf_create_subsystem", 00:17:08.437 "req_id": 1 00:17:08.437 } 00:17:08.437 Got JSON-RPC error response 00:17:08.437 response: 00:17:08.437 { 00:17:08.437 "code": -32602, 00:17:08.437 "message": "Invalid cntlid range [1-65520]" 00:17:08.437 }' 00:17:08.437 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:08.437 { 00:17:08.437 "nqn": "nqn.2016-06.io.spdk:cnode4647", 00:17:08.437 "max_cntlid": 65520, 00:17:08.437 "method": "nvmf_create_subsystem", 00:17:08.437 "req_id": 1 00:17:08.437 } 00:17:08.437 Got JSON-RPC error response 00:17:08.437 response: 00:17:08.437 { 00:17:08.437 "code": -32602, 00:17:08.437 "message": "Invalid cntlid range [1-65520]" 
00:17:08.437 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:08.437 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18533 -i 6 -I 5 00:17:08.697 [2024-12-13 12:22:36.148534] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18533: invalid cntlid range [6-5] 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:08.697 { 00:17:08.697 "nqn": "nqn.2016-06.io.spdk:cnode18533", 00:17:08.697 "min_cntlid": 6, 00:17:08.697 "max_cntlid": 5, 00:17:08.697 "method": "nvmf_create_subsystem", 00:17:08.697 "req_id": 1 00:17:08.697 } 00:17:08.697 Got JSON-RPC error response 00:17:08.697 response: 00:17:08.697 { 00:17:08.697 "code": -32602, 00:17:08.697 "message": "Invalid cntlid range [6-5]" 00:17:08.697 }' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:08.697 { 00:17:08.697 "nqn": "nqn.2016-06.io.spdk:cnode18533", 00:17:08.697 "min_cntlid": 6, 00:17:08.697 "max_cntlid": 5, 00:17:08.697 "method": "nvmf_create_subsystem", 00:17:08.697 "req_id": 1 00:17:08.697 } 00:17:08.697 Got JSON-RPC error response 00:17:08.697 response: 00:17:08.697 { 00:17:08.697 "code": -32602, 00:17:08.697 "message": "Invalid cntlid range [6-5]" 00:17:08.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:08.697 { 00:17:08.697 "name": "foobar", 00:17:08.697 "method": "nvmf_delete_target", 00:17:08.697 "req_id": 1 00:17:08.697 } 00:17:08.697 Got JSON-RPC error response 00:17:08.697 response: 00:17:08.697 { 00:17:08.697 "code": -32602, 00:17:08.697 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:08.697 }' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:08.697 { 00:17:08.697 "name": "foobar", 00:17:08.697 "method": "nvmf_delete_target", 00:17:08.697 "req_id": 1 00:17:08.697 } 00:17:08.697 Got JSON-RPC error response 00:17:08.697 response: 00:17:08.697 { 00:17:08.697 "code": -32602, 00:17:08.697 "message": "The specified target doesn't exist, cannot delete it." 
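Each of these cntlid cases follows the same expected-failure pattern: invoke the RPC with an out-of-range bound, capture the JSON-RPC error text into out, then glob-match the message. Reduced to its essentials (NQN and path illustrative):

    # Run an RPC that should fail, keep its error output, and assert on the
    # validation message rather than just the non-zero exit status.
    out=$(./scripts/rpc.py nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode1296 -i 0 2>&1) || true
    if [[ $out == *"Invalid cntlid range"* ]]; then
        echo "got the expected validation error"
    else
        echo "unexpected response: $out" >&2
        exit 1
    fi

The nvmf_delete_target case that closes this block uses the same shape, only matching on "The specified target doesn't exist" instead.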
00:17:08.697 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.697 rmmod nvme_tcp 00:17:08.697 rmmod nvme_fabrics 00:17:08.697 rmmod nvme_keyring 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 274897 ']' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 274897 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 274897 ']' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 274897 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274897 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274897' 00:17:08.697 killing process with pid 274897 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 274897 00:17:08.697 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 274897 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.957 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.495 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.495 00:17:11.495 real 0m11.914s 00:17:11.495 user 0m18.324s 00:17:11.495 sys 0m5.411s 00:17:11.495 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.495 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:11.495 ************************************ 00:17:11.495 END TEST nvmf_invalid 00:17:11.495 ************************************ 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.496 ************************************ 00:17:11.496 START TEST nvmf_connect_stress 00:17:11.496 ************************************ 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:11.496 * Looking for test storage... 
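nvmftestfini's firewall cleanup, traced just before the END TEST banner, is a three-stage pipe: dump the current rules, drop everything tagged with the SPDK_NVMF comment, and reload the filtered set, so only the rules this test added disappear. In sketch form (the explicit netns delete is an assumption, since _remove_spdk_ns runs here with xtrace disabled):

    # Remove only the firewall rules the test added (they carry an
    # '-m comment --comment SPDK_NVMF...' tag), then drop the target
    # namespace and flush the initiator-side test address.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed teardown step
    ip -4 addr flush cvl_0_1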
00:17:11.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.496 --rc genhtml_branch_coverage=1 00:17:11.496 --rc genhtml_function_coverage=1 00:17:11.496 --rc genhtml_legend=1 00:17:11.496 --rc geninfo_all_blocks=1 00:17:11.496 --rc geninfo_unexecuted_blocks=1 00:17:11.496 00:17:11.496 ' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.496 --rc genhtml_branch_coverage=1 00:17:11.496 --rc genhtml_function_coverage=1 00:17:11.496 --rc genhtml_legend=1 00:17:11.496 --rc geninfo_all_blocks=1 00:17:11.496 --rc geninfo_unexecuted_blocks=1 00:17:11.496 00:17:11.496 ' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.496 --rc genhtml_branch_coverage=1 00:17:11.496 --rc genhtml_function_coverage=1 00:17:11.496 --rc genhtml_legend=1 00:17:11.496 --rc geninfo_all_blocks=1 00:17:11.496 --rc geninfo_unexecuted_blocks=1 00:17:11.496 00:17:11.496 ' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.496 --rc genhtml_branch_coverage=1 00:17:11.496 --rc genhtml_function_coverage=1 00:17:11.496 --rc genhtml_legend=1 00:17:11.496 --rc geninfo_all_blocks=1 00:17:11.496 --rc geninfo_unexecuted_blocks=1 00:17:11.496 00:17:11.496 ' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
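The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) implements a plain componentwise version comparison: split both strings on '.', '-' and ':', then walk the components numerically. A self-contained sketch under those assumptions (helper name ver_lt is hypothetical, and non-numeric components are out of scope):

    # Return 0 when $1 is a strictly lower version than $2, splitting on
    # the same '.-:' separators the traced IFS assignment uses.
    ver_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message

Here the "less than 2" result selects the older lcov option spellings (--rc lcov_branch_coverage=1 and friends) that the following exports set up.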
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.496 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:11.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.497 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.071 12:22:44 
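The stray "[: : integer expression expected" complaint at the top of this block is benign but instructive: build_nvmf_app_args runs '[' "$VAR" -eq 1 ']' while the option variable is empty, and test(1) cannot parse '' as an integer, so the branch simply falls through. A hypothetical hardened form defaults the value before the numeric test:

    FLAG=""                          # stands in for the unset option variable
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "option enabled"
    else
        echo "option disabled"       # taken: empty expands to 0, no warning
    fi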
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.071 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:18.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:18.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:18.072 Found net devices under 0000:af:00.0: cvl_0_0 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:18.072 Found net devices under 0000:af:00.1: cvl_0_1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
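The discovery loop traced here reduces to a sysfs walk: each NIC's PCI address exposes its bound network interfaces as directory names under /sys/bus/pci/devices/<addr>/net/. A condensed equivalent, with the two E810 ports this host reported hard-coded:

    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue    # glob matched nothing: no netdev bound
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
    # expected on this host: cvl_0_0 under af:00.0, cvl_0_1 under af:00.1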
-- # net_devs+=("${pci_net_devs[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:18.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:17:18.072 00:17:18.072 --- 10.0.0.2 ping statistics --- 00:17:18.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.072 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:18.072 00:17:18.072 --- 10.0.0.1 ping statistics --- 00:17:18.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.072 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279105 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279105 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279105 ']' 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.072 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
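Everything from common.sh@250 through the two pings is the standard split setup for a physical NIC pair: the target port moves into its own network namespace so initiator and target can talk over real hardware on a single host. The traced commands, condensed (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings answering (0.391 ms and 0.200 ms here) is the gate for letting the test proceed.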
/var/tmp/spdk.sock...' 00:17:18.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.073 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.073 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 [2024-12-13 12:22:44.910919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:18.073 [2024-12-13 12:22:44.910958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.073 [2024-12-13 12:22:44.984451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.073 [2024-12-13 12:22:45.005627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.073 [2024-12-13 12:22:45.005665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.073 [2024-12-13 12:22:45.005671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.073 [2024-12-13 12:22:45.005677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.073 [2024-12-13 12:22:45.005682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.073 [2024-12-13 12:22:45.006926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.073 [2024-12-13 12:22:45.007031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.073 [2024-12-13 12:22:45.007032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 [2024-12-13 12:22:45.146115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
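The bring-up traced across these lines, with rpc_cmd resolving to scripts/rpc.py against /var/tmp/spdk.sock, boils down to four RPCs after the target starts inside the namespace. A hedged sketch with paths shortened (the listener and null-bdev calls continue in the trace just below):

    # Start the target in the namespace, then wire it up over JSON-RPC.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    NVMF_PID=$!

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB, 512 B blocks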
00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 [2024-12-13 12:22:45.166318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 NULL1 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279127 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:18.073 12:22:45 
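The @27/@28 churn above is connect_stress.sh filling $rpcs (test/nvmf/target/rpc.txt) with twenty batched entries, one cat per seq iteration; the payload being appended is not visible in this trace. A placeholder rendering of the loop shape only:

    rpcs=/tmp/rpc.txt                # stands in for the real rpc.txt path
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # the real script cats an RPC snippet per iteration; its content is
        # not shown in this log, so a marker line stands in for it
        echo "# batched rpc entry $i" >> "$rpcs"
    done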
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.073 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.333 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:18.333 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.333 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.333 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.592 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.592 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:18.592 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.592 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.592 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.159 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.159 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:19.159 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.159 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.159 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.418 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.418 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:19.418 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.418 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.418 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:19.678 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.678 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.937 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.937 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:19.937 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.937 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.937 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.196 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.196 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:20.196 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.196 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.196 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.765 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.765 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:20.765 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.765 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.765 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.024 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.024 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:21.024 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.024 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.024 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.284 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.284 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:21.284 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.284 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.284 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.543 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.543 12:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:21.543 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.543 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.543 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.802 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.802 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:21.802 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.802 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.802 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.370 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.370 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:22.370 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.370 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.370 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.629 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.629 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:22.629 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.629 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.629 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.888 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.888 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:22.888 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.888 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.888 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.148 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.148 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:23.148 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.148 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.148 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.716 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.716 12:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:23.716 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.716 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.716 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.975 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.975 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:23.975 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.975 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.975 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.234 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.234 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:24.234 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.234 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.234 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.493 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.493 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:24.493 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.493 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.493 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.752 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.752 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:24.752 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.752 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.752 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.320 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.320 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:25.320 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.320 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.320 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.580 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.580 12:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:25.580 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.580 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.580 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.839 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.839 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:25.839 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.839 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.839 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.098 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.098 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:26.098 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.098 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.098 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.357 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.357 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:26.357 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.357 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.357 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.926 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.926 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:26.926 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.926 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.926 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.185 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.186 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127 00:17:27.186 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.186 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.186 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.445 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.445 12:22:55 
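The long repetitive stretch above is the stress loop itself: connect_stress.sh@34 checks that the I/O generator (PID 279127) is still alive, and @35 replays the batched RPCs for as long as it runs. Collapsed out of xtrace form, the shape is a liveness-gated while loop; feeding rpc.txt to rpc_cmd on stdin is inferred from the bare `rpc_cmd` invocations in the trace:

# Liveness-gated RPC hammering, per connect_stress.sh@34-38 as traced above.
while kill -0 "$PERF_PID" 2>/dev/null; do
  rpc_cmd < "$rpcs"   # rpc_cmd is autotest's wrapper around scripts/rpc.py
done
wait "$PERF_PID"      # once kill -0 starts failing, reap the generator (@38)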
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127
00:17:27.445 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:27.445 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.445 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:27.704 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.704 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127
00:17:27.704 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:27.704 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.704 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:27.704 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279127
00:17:28.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279127) - No such process
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279127
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:28.272 rmmod nvme_tcp
00:17:28.272 rmmod nvme_fabrics
00:17:28.272 rmmod nvme_keyring
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279105 ']'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279105 ']'
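nvmfcleanup, traced from nvmf/common.sh@516 onward above, unloads the kernel initiator stack defensively: sync first, relax `set -e`, attempt `modprobe -v -r nvme-tcp` inside a `for i in {1..20}` retry (the module can stay busy while connections drain), then remove nvme-fabrics. A condensed sketch of that idiom; the break-on-success and the back-off are assumptions, since the trace only shows the first, successful attempt:

# Retry-unload idiom from the nvmfcleanup trace above.
sync
set +e
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # succeeds on the first pass here
  sleep 0.5                          # back-off; not visible in the trace
done
modprobe -v -r nvme-fabrics
set -e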
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279105'
killing process with pid 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279105
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:28.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:30.812
00:17:30.812 real 0m19.332s
00:17:30.812 user 0m42.317s
00:17:30.812 sys 0m6.766s
00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:30.812 ************************************
00:17:30.812 END TEST nvmf_connect_stress
00:17:30.812 ************************************
00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:30.812 12:22:58
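killprocess, traced at common/autotest_common.sh@954-978 above, is deliberately careful before signalling anything: validate the argument, probe the PID, identify the command name, and refuse to touch a sudo wrapper. A sketch of the same shape; the early returns are inferred from the control flow, and the real helper also branches on `uname` so the ps invocation works on FreeBSD too:

# Shape of the killprocess helper per the trace above.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                        # @954: no argument, bail
  kill -0 "$pid" 2>/dev/null || return 0           # @958: already gone
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")  # @960, Linux branch
  [ "$process_name" = sudo ] && return 1           # @964: never kill sudo
  echo "killing process with pid $pid"             # @972
  kill "$pid"                                      # @973
  wait "$pid"                                      # @978: reap the child
}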
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.812 ************************************ 00:17:30.812 START TEST nvmf_fused_ordering 00:17:30.812 ************************************ 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:30.812 * Looking for test storage... 00:17:30.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.812 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.813 --rc genhtml_branch_coverage=1 00:17:30.813 --rc genhtml_function_coverage=1 00:17:30.813 --rc genhtml_legend=1 00:17:30.813 --rc geninfo_all_blocks=1 00:17:30.813 --rc geninfo_unexecuted_blocks=1 00:17:30.813 00:17:30.813 ' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.813 --rc genhtml_branch_coverage=1 00:17:30.813 --rc genhtml_function_coverage=1 00:17:30.813 --rc genhtml_legend=1 00:17:30.813 --rc geninfo_all_blocks=1 00:17:30.813 --rc geninfo_unexecuted_blocks=1 00:17:30.813 00:17:30.813 ' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.813 --rc genhtml_branch_coverage=1 00:17:30.813 --rc genhtml_function_coverage=1 00:17:30.813 --rc genhtml_legend=1 00:17:30.813 --rc geninfo_all_blocks=1 00:17:30.813 --rc geninfo_unexecuted_blocks=1 00:17:30.813 00:17:30.813 ' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.813 --rc genhtml_branch_coverage=1 00:17:30.813 --rc genhtml_function_coverage=1 00:17:30.813 --rc genhtml_legend=1 00:17:30.813 --rc geninfo_all_blocks=1 00:17:30.813 --rc geninfo_unexecuted_blocks=1 00:17:30.813 00:17:30.813 ' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:30.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.813 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.389 12:23:03 
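The `[: : integer expression expected` complaint near the top of this stretch (nvmf/common.sh line 33) is a bash pitfall rather than a test failure: the script runs a numeric test against a variable that is empty in this environment, and `'[' '' -eq 1 ']'` is a runtime error in `[`, not a false result. Defaulting the operand silences it; SOME_FLAG below is a stand-in name, since xtrace does not show which variable expanded empty:

# The failing shape from nvmf/common.sh@33:  [ "$SOME_FLAG" -eq 1 ]
# errors out when SOME_FLAG is unset or empty. Guard with a default:
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"
# or use an arithmetic context, where an unset name evaluates to 0:
(( SOME_FLAG == 1 )) && echo "flag set"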
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:37.389 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:37.389 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.389 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:37.390 Found net devices under 0000:af:00.0: cvl_0_0 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:37.390 Found net devices under 0000:af:00.1: cvl_0_1 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.390 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:37.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:37.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms
00:17:37.390
00:17:37.390 --- 10.0.0.2 ping statistics ---
00:17:37.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:37.390 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:37.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:37.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:17:37.390
00:17:37.390 --- 10.0.0.1 ping statistics ---
00:17:37.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:37.390 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=284382
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 284382
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 284382 ']'
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:17:37.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.390 [2024-12-13 12:23:04.256054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:37.390 [2024-12-13 12:23:04.256101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.390 [2024-12-13 12:23:04.332219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.390 [2024-12-13 12:23:04.353827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.390 [2024-12-13 12:23:04.353865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.390 [2024-12-13 12:23:04.353872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.390 [2024-12-13 12:23:04.353878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.390 [2024-12-13 12:23:04.353884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.390 [2024-12-13 12:23:04.354344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.390 [2024-12-13 12:23:04.495983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.390 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 [2024-12-13 12:23:04.512184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 NULL1 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.391 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:37.391 [2024-12-13 12:23:04.565674] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
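For reference, the target setup traced above reduces to a short RPC sequence followed by the fused_ordering client. A minimal by-hand sketch, assuming the nvmf_tgt launched above still owns /var/tmp/spdk.sock; the rpc.py path, transport flags, and the 10.0.0.2:4420 listener are taken from this run (the cvl_0_0_ns_spdk network-namespace wrapper the harness uses is left out):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # same TCP transport flags the test passed above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, up to 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512-byte blocks: the "size: 1GB" namespace
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # attach NULL1 as namespace 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows is the client logging one iteration of its fused-command loop against namespace 1.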
00:17:37.391 [2024-12-13 12:23:04.565717] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284408 ]
00:17:37.391 Attached to nqn.2016-06.io.spdk:cnode1
00:17:37.391 Namespace ID: 1 size: 1GB
00:17:37.391 fused_ordering(0)
00:17:37.391 fused_ordering(1)
...
00:17:38.745 [2024-12-13 12:23:06.164465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03110 is same with the state(6) to be set
00:17:38.745 fused_ordering(821)
00:17:38.745 fused_ordering(822)
...
00:17:38.745 fused_ordering(1022)
00:17:38.745 fused_ordering(1023)
00:17:38.745 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:38.745 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:38.745 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:38.746 rmmod nvme_tcp
00:17:38.746 rmmod nvme_fabrics
00:17:38.746 rmmod nvme_keyring
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 --
# set -e 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 284382 ']' 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 284382 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 284382 ']' 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 284382 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284382 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284382' 00:17:38.746 killing process with pid 284382 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 284382 00:17:38.746 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 284382 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.005 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:40.912 00:17:40.912 real 0m10.406s 00:17:40.912 user 0m5.007s 00:17:40.912 sys 0m5.382s 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set 
+x 00:17:40.912 ************************************ 00:17:40.912 END TEST nvmf_fused_ordering 00:17:40.912 ************************************ 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.912 ************************************ 00:17:40.912 START TEST nvmf_ns_masking 00:17:40.912 ************************************ 00:17:40.912 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:41.172 * Looking for test storage... 00:17:41.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.173 --rc genhtml_branch_coverage=1 00:17:41.173 --rc genhtml_function_coverage=1 00:17:41.173 --rc genhtml_legend=1 00:17:41.173 --rc geninfo_all_blocks=1 00:17:41.173 --rc geninfo_unexecuted_blocks=1 00:17:41.173 00:17:41.173 ' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.173 --rc genhtml_branch_coverage=1 00:17:41.173 --rc genhtml_function_coverage=1 00:17:41.173 --rc genhtml_legend=1 00:17:41.173 --rc geninfo_all_blocks=1 00:17:41.173 --rc geninfo_unexecuted_blocks=1 00:17:41.173 00:17:41.173 ' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.173 --rc genhtml_branch_coverage=1 00:17:41.173 --rc genhtml_function_coverage=1 00:17:41.173 --rc genhtml_legend=1 00:17:41.173 --rc geninfo_all_blocks=1 00:17:41.173 --rc geninfo_unexecuted_blocks=1 00:17:41.173 00:17:41.173 ' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.173 --rc genhtml_branch_coverage=1 00:17:41.173 --rc genhtml_function_coverage=1 00:17:41.173 --rc genhtml_legend=1 00:17:41.173 --rc geninfo_all_blocks=1 00:17:41.173 --rc geninfo_unexecuted_blocks=1 00:17:41.173 00:17:41.173 ' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:41.173 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=15b840d4-abf6-46a0-8961-183305543a0a 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=23367057-8bef-4f49-b213-1b1f911abc88 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0c9b56df-2987-4a32-8be0-339f262aec60 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:41.174 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:47.749 12:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:47.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:47.749 12:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:47.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:47.749 Found net devices under 0000:af:00.0: cvl_0_0 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
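
The two "Found 0000:af:00.0/1 (0x8086 - 0x159b)" lines come from the NIC-discovery idiom traced above: nvmf/common.sh buckets PCI IDs into per-family arrays (e810, x722, mlx) keyed by "vendor:device", then keeps only the family that suits the transport. A condensed sketch, assuming pci_bus_cache has already been populated by the PCI scan:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})      # the device ID matched twice on this rig
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                        # TCP on e810 hardware keeps only this set
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # each port must expose a net device
    done
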
00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:47.749 Found net devices under 0000:af:00.1: cvl_0_1 00:17:47.749 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.750 12:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:47.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:17:47.750 00:17:47.750 --- 10.0.0.2 ping statistics --- 00:17:47.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.750 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:47.750 00:17:47.750 --- 10.0.0.1 ping statistics --- 00:17:47.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.750 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=288308 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 288308 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 288308 ']' 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
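
Collected in one place, the nvmf_tcp_init plumbing traced above: the target port moves into its own network namespace so a single machine can play initiator (10.0.0.1, root namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) at once, with the two pings proving the path before any NVMe traffic flows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC joins the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
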
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.750 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 [2024-12-13 12:23:14.931545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:47.750 [2024-12-13 12:23:14.931593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.750 [2024-12-13 12:23:15.008275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.750 [2024-12-13 12:23:15.030483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.750 [2024-12-13 12:23:15.030523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.750 [2024-12-13 12:23:15.030531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.750 [2024-12-13 12:23:15.030538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.750 [2024-12-13 12:23:15.030543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
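
nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; waitforlisten is the SPDK helper that does the polling, so the loop below is only an illustrative stand-in for it:

    # -i 0 pins the shared-memory ID, -e 0xFFFF enables every tracepoint group
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.2                                    # target not listening yet
    done
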
00:17:47.750 [2024-12-13 12:23:15.031064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.750 [2024-12-13 12:23:15.330459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:47.750 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:48.009 Malloc1 00:17:48.009 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:48.268 Malloc2 00:17:48.268 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:48.268 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:48.528 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.787 [2024-12-13 12:23:16.340645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.787 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:48.787 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c9b56df-2987-4a32-8be0-339f262aec60 -a 10.0.0.2 -s 4420 -i 4 00:17:49.046 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.046 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.046 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.046 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:49.046 
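
With the target up, the provisioning traced above boils down to a handful of RPCs plus one kernel-initiator connect; rpc_py abbreviates the scripts/rpc.py path and the -I value is the HOSTID that uuidgen produced at the top of the script:

    rpc_py=./scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1     # 64 MiB backing store, 512 B blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0c9b56df-2987-4a32-8be0-339f262aec60 -a 10.0.0.2 -s 4420 -i 4
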
12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.951 [ 0]:0x1 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.951 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cdcc530bd354c348c33cc7a2de8413e 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cdcc530bd354c348c33cc7a2de8413e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:51.210 [ 0]:0x1 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.210 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cdcc530bd354c348c33cc7a2de8413e 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cdcc530bd354c348c33cc7a2de8413e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.470 12:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.470 [ 1]:0x2 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:51.470 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.470 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:51.470 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.470 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:51.470 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.730 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.989 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:51.989 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:51.989 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c9b56df-2987-4a32-8be0-339f262aec60 -a 10.0.0.2 -s 4420 -i 4 00:17:52.248 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:52.249 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.249 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.249 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:52.249 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:52.249 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
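
The "[ 0]:0x1" / "[ 1]:0x2" markers above are ns_is_visible at work: a namespace counts as visible only if it reports a non-zero NGUID. Reconstructed from the xtrace (the authoritative helper lives in test/nvmf/target/ns_masking.sh):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"          # prints "[ 0]:0x1" when active
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a masked (inactive) namespace identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1    # passes above: nguid 6cdcc530... is non-zero
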
return 0 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:54.154 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.414 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.414 [ 0]:0x2 00:17:54.414 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.414 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.414 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:54.414 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.414 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.674 [ 0]:0x1 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cdcc530bd354c348c33cc7a2de8413e 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cdcc530bd354c348c33cc7a2de8413e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.674 [ 1]:0x2 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.674 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.933 12:23:22 
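
This is the crux of the test: re-added with --no-auto-visible, NSID 1 stayed hidden (the all-zero NGUID just above) until nvmf_ns_add_host granted it to this host's NQN, and nvmf_ns_remove_host masks it again. In isolation:

    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # -> NSID 1 visible to host1 only; any other host NQN still sees nothing
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # -> NSID 1 masked again; ns_is_visible 0x1 is now expected to fail
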
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.933 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:55.193 [ 0]:0x2 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.193 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.451 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c9b56df-2987-4a32-8be0-339f262aec60 -a 10.0.0.2 -s 4420 -i 4 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:55.452 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.988 [ 0]:0x1 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cdcc530bd354c348c33cc7a2de8413e 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cdcc530bd354c348c33cc7a2de8413e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.988 [ 1]:0x2 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:57.988 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.989 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.248 [ 0]:0x2 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.248 12:23:25 
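
Negative checks go through the NOT wrapper seen above and again just below: run the command, capture its status, and succeed only if it failed. Simplified from autotest_common.sh (the real helper also validates its argument and refuses to invert statuses above 128, which indicate a crash rather than a clean failure):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( es != 0 ))                    # pass only when the wrapped command failed
    }
    NOT ns_is_visible 0x1                # passes while NSID 1 is masked for this host
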
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.248 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.249 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:58.508 [2024-12-13 12:23:25.971737] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:58.508 request: 00:17:58.508 { 00:17:58.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.508 "nsid": 2, 00:17:58.508 "host": "nqn.2016-06.io.spdk:host1", 00:17:58.508 "method": "nvmf_ns_remove_host", 00:17:58.508 "req_id": 1 00:17:58.508 } 00:17:58.508 Got JSON-RPC error response 00:17:58.508 response: 00:17:58.508 { 00:17:58.508 "code": -32602, 00:17:58.508 "message": "Invalid parameters" 00:17:58.508 } 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:58.508 12:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.508 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.508 [ 0]:0x2 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f83ca25615f4c51a54afd965b5e9631 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f83ca25615f4c51a54afd965b5e9631 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:58.508 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290261 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290261 /var/tmp/host.sock 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 290261 ']' 00:17:58.768 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:58.769 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.769 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:58.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:58.769 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.769 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:58.769 [2024-12-13 12:23:26.327252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:58.769 [2024-12-13 12:23:26.327296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290261 ] 00:17:58.769 [2024-12-13 12:23:26.400974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.769 [2024-12-13 12:23:26.423352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.028 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.028 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:59.028 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.287 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 15b840d4-abf6-46a0-8961-183305543a0a 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 15B840D4ABF646A08961183305543A0A -i 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 23367057-8bef-4f49-b213-1b1f911abc88 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:59.545 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 233670578BEF4F49B2131B1F911ABC88 -i 00:17:59.804 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
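
Phase two re-adds both namespaces with explicit NGUIDs derived from the UUIDs generated at the top of the run: uuid2nguid just strips the dashes (the trace shows tr -d -; the uppercasing is assumed to happen in the same helper). A sketch of the call as traced, where -i appears to be the short form of the --no-auto-visible flag spelled out earlier:

    uuid2nguid() {
        local u=${1^^}          # 15b840d4-abf6-... -> 15B840D4-ABF6-...
        echo "${u//-/}"         # -> 15B840D4ABF646A08961183305543A0A
    }
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g "$(uuid2nguid 15b840d4-abf6-46a0-8961-183305543a0a)" -i
    $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
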
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:00.064 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:00.064 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:00.064 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:00.633 nvme0n1 00:18:00.633 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:00.633 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:00.893 nvme1n2 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:00.893 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 15b840d4-abf6-46a0-8961-183305543a0a == \1\5\b\8\4\0\d\4\-\a\b\f\6\-\4\6\a\0\-\8\9\6\1\-\1\8\3\3\0\5\5\4\3\a\0\a ]] 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:01.152 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:01.411 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
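
The final leg swaps the kernel initiator for an SPDK one: a second app serving /var/tmp/host.sock attaches the subsystem once per host NQN, and bdev_get_bdevs then shows exactly the namespaces each host was granted. As traced:

    hostrpc="./scripts/rpc.py -s /var/tmp/host.sock"
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2
    $hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect "nvme0n1 nvme1n2"
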
23367057-8bef-4f49-b213-1b1f911abc88 == \2\3\3\6\7\0\5\7\-\8\b\e\f\-\4\f\4\9\-\b\2\1\3\-\1\b\1\f\9\1\1\a\b\c\8\8 ]] 00:18:01.411 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:01.670 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 15b840d4-abf6-46a0-8961-183305543a0a 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 15B840D4ABF646A08961183305543A0A 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 15B840D4ABF646A08961183305543A0A 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.930 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 15B840D4ABF646A08961183305543A0A 00:18:01.931 [2024-12-13 12:23:29.557586] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:01.931 [2024-12-13 12:23:29.557619] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:01.931 [2024-12-13 12:23:29.557628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 request: 00:18:01.931 { 00:18:01.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.931 "namespace": { 00:18:01.931 "bdev_name": 
"invalid", 00:18:01.931 "nsid": 1, 00:18:01.931 "nguid": "15B840D4ABF646A08961183305543A0A", 00:18:01.931 "no_auto_visible": false, 00:18:01.931 "hide_metadata": false 00:18:01.931 }, 00:18:01.931 "method": "nvmf_subsystem_add_ns", 00:18:01.931 "req_id": 1 00:18:01.931 } 00:18:01.931 Got JSON-RPC error response 00:18:01.931 response: 00:18:01.931 { 00:18:01.931 "code": -32602, 00:18:01.931 "message": "Invalid parameters" 00:18:01.931 } 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 15b840d4-abf6-46a0-8961-183305543a0a 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:01.931 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 15B840D4ABF646A08961183305543A0A -i 00:18:02.190 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290261 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 290261 ']' 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 290261 00:18:04.726 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290261 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290261' 00:18:04.726 killing process with pid 290261 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 290261 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 290261 00:18:04.726 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.986 rmmod nvme_tcp 00:18:04.986 rmmod nvme_fabrics 00:18:04.986 rmmod nvme_keyring 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 288308 ']' 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 288308 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 288308 ']' 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 288308 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288308 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288308' 00:18:04.986 killing process with pid 288308 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 288308 00:18:04.986 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 288308 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:05.245 
12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:05.245 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:05.246 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.246 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.246 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:07.784 00:18:07.784 real 0m26.327s 00:18:07.784 user 0m31.012s 00:18:07.784 sys 0m7.040s 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.784 ************************************ 00:18:07.784 END TEST nvmf_ns_masking 00:18:07.784 ************************************ 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.784 ************************************ 00:18:07.784 START TEST nvmf_nvme_cli 00:18:07.784 ************************************ 00:18:07.784 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.784 * Looking for test storage... 
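As a reading aid for the ns_masking run that concludes above: the test drives everything through SPDK's JSON-RPC client, and the following is a minimal sketch of that masking sequence, reassembled from the rpc.py invocations visible in the log. The uuid2nguid body shown here is an assumption consistent with the `tr -d -` output in the trace, and the snippet is illustrative rather than the literal ns_masking.sh source.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

uuid2nguid() {                     # UUID -> NGUID: uppercase, strip dashes
    local u=${1^^}
    echo "${u//-/}"
}

NGUID=$(uuid2nguid 15b840d4-abf6-46a0-8961-183305543a0a)
# Add the namespace without auto-visibility (-i), then expose it to one host:
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$NGUID" -i
$RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Attach a host-side controller against the second RPC socket to verify visibility:
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

The negative case traced above (adding bdev "invalid") then confirms the target rejects a namespace whose bdev cannot be opened with the JSON-RPC error -32602 shown in the log.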
00:18:07.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.784 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:07.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.785 --rc genhtml_branch_coverage=1 00:18:07.785 --rc genhtml_function_coverage=1 00:18:07.785 --rc genhtml_legend=1 00:18:07.785 --rc geninfo_all_blocks=1 00:18:07.785 --rc geninfo_unexecuted_blocks=1 00:18:07.785 00:18:07.785 ' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.785 --rc genhtml_branch_coverage=1 00:18:07.785 --rc genhtml_function_coverage=1 00:18:07.785 --rc genhtml_legend=1 00:18:07.785 --rc geninfo_all_blocks=1 00:18:07.785 --rc geninfo_unexecuted_blocks=1 00:18:07.785 00:18:07.785 ' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.785 --rc genhtml_branch_coverage=1 00:18:07.785 --rc genhtml_function_coverage=1 00:18:07.785 --rc genhtml_legend=1 00:18:07.785 --rc geninfo_all_blocks=1 00:18:07.785 --rc geninfo_unexecuted_blocks=1 00:18:07.785 00:18:07.785 ' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.785 --rc genhtml_branch_coverage=1 00:18:07.785 --rc genhtml_function_coverage=1 00:18:07.785 --rc genhtml_legend=1 00:18:07.785 --rc geninfo_all_blocks=1 00:18:07.785 --rc geninfo_unexecuted_blocks=1 00:18:07.785 00:18:07.785 ' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
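The cmp_versions trace above walks a componentwise numeric comparison: split the version strings into fields, compare left to right, and let the first differing field decide. A standalone sketch with the same semantics, simplified to split on dots only (the harness additionally splits on '-' and ':'), would be:

lt() {                                  # true if dotted version $1 < $2
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1                            # equal compares as not-less
}
lt 1.15 2 && echo "lcov 1.15 predates 2"

This matches the `lt 1.15 2` call the harness makes above to pick lcov coverage options.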
00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.785 12:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:07.785 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:14.360 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:14.360 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.360 
12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:14.360 Found net devices under 0000:af:00.0: cvl_0_0 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.360 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:14.361 Found net devices under 0000:af:00.1: cvl_0_1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:14.361 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:14.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:18:14.361 00:18:14.361 --- 10.0.0.2 ping statistics --- 00:18:14.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.361 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:18:14.361 00:18:14.361 --- 10.0.0.1 ping statistics --- 00:18:14.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.361 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=294778 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 294778 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 294778 ']' 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 [2024-12-13 12:23:41.146938] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:14.361 [2024-12-13 12:23:41.146981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.361 [2024-12-13 12:23:41.225261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.361 [2024-12-13 12:23:41.249758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.361 [2024-12-13 12:23:41.249796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.361 [2024-12-13 12:23:41.249804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.361 [2024-12-13 12:23:41.249811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.361 [2024-12-13 12:23:41.249816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.361 [2024-12-13 12:23:41.251207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.361 [2024-12-13 12:23:41.251237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.361 [2024-12-13 12:23:41.251343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.361 [2024-12-13 12:23:41.251343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 [2024-12-13 12:23:41.383045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 Malloc0 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 Malloc1 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.361 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.362 [2024-12-13 12:23:41.471762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:14.362 00:18:14.362 Discovery Log Number of Records 2, Generation counter 2 00:18:14.362 =====Discovery Log Entry 0====== 00:18:14.362 trtype: tcp 00:18:14.362 adrfam: ipv4 00:18:14.362 subtype: current discovery subsystem 00:18:14.362 treq: not required 00:18:14.362 portid: 0 00:18:14.362 trsvcid: 4420 00:18:14.362 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:14.362 traddr: 10.0.0.2 00:18:14.362 eflags: explicit discovery connections, duplicate discovery information 00:18:14.362 sectype: none 00:18:14.362 =====Discovery Log Entry 1====== 00:18:14.362 trtype: tcp 00:18:14.362 adrfam: ipv4 00:18:14.362 subtype: nvme subsystem 00:18:14.362 treq: not required 00:18:14.362 portid: 0 00:18:14.362 trsvcid: 4420 00:18:14.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:14.362 traddr: 10.0.0.2 00:18:14.362 eflags: none 00:18:14.362 sectype: none 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:14.362 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:15.300 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:17.207 12:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.207 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:17.467 /dev/nvme0n2 ]] 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:17.467 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.726 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.726 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:17.726 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:17.726 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.726 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:17.727 rmmod nvme_tcp 00:18:17.727 rmmod nvme_fabrics 00:18:17.727 rmmod nvme_keyring 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 294778 ']' 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 294778 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 294778 ']' 00:18:17.727 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 294778 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294778 
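With the nvme_cli pass winding down above, here is a condensed sketch of the target/host sequence it exercised, reassembled from the rpc_cmd and nvme invocations visible in the log. The --hostnqn/--hostid arguments and the `ip netns exec cvl_0_0_ns_spdk` wrapper shown in the trace are omitted for brevity; this is not the literal nvme_cli.sh source.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: TCP transport, two 64 MB malloc bdevs (512 B blocks), listeners.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
    -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: discover, connect, expect /dev/nvme0n1 and /dev/nvme0n2, disconnect.
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1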
00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294778' 00:18:17.986 killing process with pid 294778 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 294778 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 294778 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.986 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:20.525 00:18:20.525 real 0m12.766s 00:18:20.525 user 0m19.327s 00:18:20.525 sys 0m5.030s 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 ************************************ 00:18:20.525 END TEST nvmf_nvme_cli 00:18:20.525 ************************************ 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 ************************************ 00:18:20.525 START TEST nvmf_vfio_user 00:18:20.525 ************************************ 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:20.525 * Looking for test storage... 00:18:20.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.525 --rc genhtml_branch_coverage=1 00:18:20.525 --rc genhtml_function_coverage=1 00:18:20.525 --rc genhtml_legend=1 00:18:20.525 --rc geninfo_all_blocks=1 00:18:20.525 --rc geninfo_unexecuted_blocks=1 00:18:20.525 00:18:20.525 ' 00:18:20.525 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.525 --rc genhtml_branch_coverage=1 00:18:20.525 --rc genhtml_function_coverage=1 00:18:20.526 --rc genhtml_legend=1 00:18:20.526 --rc geninfo_all_blocks=1 00:18:20.526 --rc geninfo_unexecuted_blocks=1 00:18:20.526 00:18:20.526 ' 00:18:20.526 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.526 --rc genhtml_branch_coverage=1 00:18:20.526 --rc genhtml_function_coverage=1 00:18:20.526 --rc genhtml_legend=1 00:18:20.526 --rc geninfo_all_blocks=1 00:18:20.526 --rc geninfo_unexecuted_blocks=1 00:18:20.526 00:18:20.526 ' 00:18:20.526 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.526 --rc genhtml_branch_coverage=1 00:18:20.526 --rc genhtml_function_coverage=1 00:18:20.526 --rc genhtml_legend=1 00:18:20.526 --rc geninfo_all_blocks=1 00:18:20.526 --rc geninfo_unexecuted_blocks=1 00:18:20.526 00:18:20.526 ' 00:18:20.526 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.526 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
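The "line 33: [: : integer expression expected" complaint just above is bash's single-bracket arithmetic test choking on an empty operand; the trace shows '[' '' -eq 1 ']' being evaluated. The failing pattern and a null-safe variant, where SPDK_TEST_FOO is a hypothetical stand-in for whatever flag common.sh line 33 actually reads:

    [ "$SPDK_TEST_FOO" -eq 1 ] && echo tuning          # fails when $SPDK_TEST_FOO is empty
    [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo tuning     # defaulting the operand keeps -eq happy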
00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296021 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296021' 00:18:20.526 Process pid: 296021 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296021 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 296021 ']' 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.526 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:20.526 [2024-12-13 12:23:48.082431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:20.526 [2024-12-13 12:23:48.082480] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.526 [2024-12-13 12:23:48.156930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.526 [2024-12-13 12:23:48.179071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.526 [2024-12-13 12:23:48.179112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
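The target bring-up traced above (@54-@60) reduces to: start nvmf_tgt pinned to four cores, trap cleanup on exit, and block until the RPC socket answers. A condensed sketch; the rpc_get_methods poll is a minimal stand-in for the autotest waitforlisten helper:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2   # waitforlisten also enforces a timeout; omitted here
    done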
00:18:20.526 [2024-12-13 12:23:48.179119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.526 [2024-12-13 12:23:48.179125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.526 [2024-12-13 12:23:48.179130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.526 [2024-12-13 12:23:48.180437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.526 [2024-12-13 12:23:48.180545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.526 [2024-12-13 12:23:48.180653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.526 [2024-12-13 12:23:48.180655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.786 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.786 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:20.786 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:21.723 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:21.982 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:21.982 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:21.982 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:21.982 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:21.982 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:22.242 Malloc1 00:18:22.242 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:22.242 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:22.502 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:22.761 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:22.761 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:22.761 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:23.020 Malloc2 00:18:23.020 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
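Device 2's subsystem is being created at this point in the trace; collecting the whole per-device sequence from the surrounding lines gives a straight RPC script (device 1 shown; the loop repeats with Malloc2/cnode2 against /var/run/vfio-user/domain/vfio-user2/2 — every command below appears verbatim in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0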
00:18:23.279 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:23.279 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:23.538 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:23.538 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:23.538 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:23.538 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:23.539 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:23.539 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:23.539 [2024-12-13 12:23:51.199094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:23.539 [2024-12-13 12:23:51.199127] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296611 ] 00:18:23.539 [2024-12-13 12:23:51.238091] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:23.800 [2024-12-13 12:23:51.243477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:23.800 [2024-12-13 12:23:51.243495] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa50c398000 00:18:23.800 [2024-12-13 12:23:51.244476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.245476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.246477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.247486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.248493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.249498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.250506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.251513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:23.800 [2024-12-13 12:23:51.252519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:23.800 [2024-12-13 12:23:51.252527] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa50b0a1000 00:18:23.800 [2024-12-13 12:23:51.253431] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:23.800 [2024-12-13 12:23:51.262827] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:23.800 [2024-12-13 12:23:51.262852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:23.800 [2024-12-13 12:23:51.268627] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:23.800 [2024-12-13 12:23:51.268663] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:23.800 [2024-12-13 12:23:51.268734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:23.800 [2024-12-13 12:23:51.268750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:23.800 [2024-12-13 12:23:51.268755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:23.800 [2024-12-13 12:23:51.269623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:23.800 [2024-12-13 12:23:51.269632] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:23.800 [2024-12-13 12:23:51.269638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:23.800 [2024-12-13 12:23:51.270625] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:23.801 [2024-12-13 12:23:51.270632] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:23.801 [2024-12-13 12:23:51.270639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.271628] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:23.801 [2024-12-13 12:23:51.271635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.272638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:23.801 [2024-12-13 12:23:51.272644] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:23.801 [2024-12-13 12:23:51.272649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.272655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.272765] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:23.801 [2024-12-13 12:23:51.272769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.272775] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:23.801 [2024-12-13 12:23:51.273647] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:23.801 [2024-12-13 12:23:51.274652] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:23.801 [2024-12-13 12:23:51.275658] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:23.801 [2024-12-13 12:23:51.276661] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.801 [2024-12-13 12:23:51.276740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:23.801 [2024-12-13 12:23:51.277674] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:23.801 [2024-12-13 12:23:51.277682] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:23.801 [2024-12-13 12:23:51.277686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:23.801 [2024-12-13 12:23:51.277709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277720] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.801 [2024-12-13 12:23:51.277725] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.801 [2024-12-13 12:23:51.277729] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.801 [2024-12-13 12:23:51.277741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.277797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.277806] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:23.801 [2024-12-13 12:23:51.277811] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:23.801 [2024-12-13 12:23:51.277815] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:23.801 [2024-12-13 12:23:51.277819] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:23.801 [2024-12-13 12:23:51.277823] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:23.801 [2024-12-13 12:23:51.277827] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:23.801 [2024-12-13 12:23:51.277832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.277866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.277876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.801 [2024-12-13 12:23:51.277883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.801 [2024-12-13 12:23:51.277890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.801 [2024-12-13 12:23:51.277897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.801 [2024-12-13 12:23:51.277901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.277926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.277931] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:23.801 
[2024-12-13 12:23:51.277936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.277955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.277969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.278017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278031] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:23.801 [2024-12-13 12:23:51.278036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:23.801 [2024-12-13 12:23:51.278039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.801 [2024-12-13 12:23:51.278044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.278062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.278070] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:23.801 [2024-12-13 12:23:51.278082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278095] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.801 [2024-12-13 12:23:51.278099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.801 [2024-12-13 12:23:51.278102] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.801 [2024-12-13 12:23:51.278107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.278136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.278147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278160] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:23.801 [2024-12-13 12:23:51.278164] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.801 [2024-12-13 12:23:51.278166] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.801 [2024-12-13 12:23:51.278172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.801 [2024-12-13 12:23:51.278183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:23.801 [2024-12-13 12:23:51.278190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278220] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:23.801 [2024-12-13 12:23:51.278224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:23.801 [2024-12-13 12:23:51.278229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:23.802 [2024-12-13 12:23:51.278246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278325] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:23.802 [2024-12-13 12:23:51.278329] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:23.802 [2024-12-13 12:23:51.278332] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:23.802 [2024-12-13 12:23:51.278335] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:23.802 [2024-12-13 12:23:51.278338] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:23.802 [2024-12-13 12:23:51.278343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:23.802 [2024-12-13 12:23:51.278350] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:23.802 [2024-12-13 12:23:51.278354] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:23.802 [2024-12-13 12:23:51.278357] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.802 [2024-12-13 12:23:51.278362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278368] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:23.802 [2024-12-13 12:23:51.278371] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:23.802 [2024-12-13 12:23:51.278374] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.802 [2024-12-13 12:23:51.278380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278386] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:23.802 [2024-12-13 12:23:51.278390] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:23.802 [2024-12-13 12:23:51.278393] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:23.802 [2024-12-13 12:23:51.278398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:23.802 [2024-12-13 12:23:51.278404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:23.802 [2024-12-13 12:23:51.278431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:23.802 ===================================================== 00:18:23.802 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.802 ===================================================== 00:18:23.802 Controller Capabilities/Features 00:18:23.802 ================================ 00:18:23.802 Vendor ID: 4e58 00:18:23.802 Subsystem Vendor ID: 4e58 00:18:23.802 Serial Number: SPDK1 00:18:23.802 Model Number: SPDK bdev Controller 00:18:23.802 Firmware Version: 25.01 00:18:23.802 Recommended Arb Burst: 6 00:18:23.802 IEEE OUI Identifier: 8d 6b 50 00:18:23.802 Multi-path I/O 00:18:23.802 May have multiple subsystem ports: Yes 00:18:23.802 May have multiple controllers: Yes 00:18:23.802 Associated with SR-IOV VF: No 00:18:23.802 Max Data Transfer Size: 131072 00:18:23.802 Max Number of Namespaces: 32 00:18:23.802 Max Number of I/O Queues: 127 00:18:23.802 NVMe Specification Version (VS): 1.3 00:18:23.802 NVMe Specification Version (Identify): 1.3 00:18:23.802 Maximum Queue Entries: 256 00:18:23.802 Contiguous Queues Required: Yes 00:18:23.802 Arbitration Mechanisms Supported 00:18:23.802 Weighted Round Robin: Not Supported 00:18:23.802 Vendor Specific: Not Supported 00:18:23.802 Reset Timeout: 15000 ms 00:18:23.802 Doorbell Stride: 4 bytes 00:18:23.802 NVM Subsystem Reset: Not Supported 00:18:23.802 Command Sets Supported 00:18:23.802 NVM Command Set: Supported 00:18:23.802 Boot Partition: Not Supported 00:18:23.802 Memory Page Size Minimum: 4096 bytes 00:18:23.802 Memory Page Size Maximum: 4096 bytes 00:18:23.802 Persistent Memory Region: Not Supported 00:18:23.802 Optional Asynchronous Events Supported 00:18:23.802 Namespace Attribute Notices: Supported 00:18:23.802 Firmware Activation Notices: Not Supported 00:18:23.802 ANA Change Notices: Not Supported 00:18:23.802 PLE Aggregate Log Change Notices: Not Supported 00:18:23.802 LBA Status Info Alert Notices: Not Supported 00:18:23.802 EGE Aggregate Log Change Notices: Not Supported 00:18:23.802 Normal NVM Subsystem Shutdown event: Not Supported 00:18:23.802 Zone Descriptor Change Notices: Not Supported 00:18:23.802 Discovery Log Change Notices: Not Supported 00:18:23.802 Controller Attributes 00:18:23.802 128-bit Host Identifier: Supported 00:18:23.802 Non-Operational Permissive Mode: Not Supported 00:18:23.802 NVM Sets: Not Supported 00:18:23.802 Read Recovery Levels: Not Supported 00:18:23.802 Endurance Groups: Not Supported 00:18:23.802 Predictable Latency Mode: Not Supported 00:18:23.802 Traffic Based Keep ALive: Not Supported 00:18:23.802 Namespace Granularity: Not Supported 00:18:23.802 SQ Associations: Not Supported 00:18:23.802 UUID List: Not Supported 00:18:23.802 Multi-Domain Subsystem: Not Supported 00:18:23.802 Fixed Capacity Management: Not Supported 00:18:23.802 Variable Capacity Management: Not Supported 00:18:23.802 Delete Endurance Group: Not Supported 00:18:23.802 Delete NVM Set: Not Supported 00:18:23.802 Extended LBA Formats Supported: Not Supported 00:18:23.802 Flexible Data Placement Supported: Not Supported 00:18:23.802 00:18:23.802 Controller Memory Buffer Support 00:18:23.802 ================================ 00:18:23.802 
Supported: No 00:18:23.802 00:18:23.802 Persistent Memory Region Support 00:18:23.802 ================================ 00:18:23.802 Supported: No 00:18:23.802 00:18:23.802 Admin Command Set Attributes 00:18:23.802 ============================ 00:18:23.802 Security Send/Receive: Not Supported 00:18:23.802 Format NVM: Not Supported 00:18:23.802 Firmware Activate/Download: Not Supported 00:18:23.802 Namespace Management: Not Supported 00:18:23.802 Device Self-Test: Not Supported 00:18:23.802 Directives: Not Supported 00:18:23.802 NVMe-MI: Not Supported 00:18:23.802 Virtualization Management: Not Supported 00:18:23.802 Doorbell Buffer Config: Not Supported 00:18:23.802 Get LBA Status Capability: Not Supported 00:18:23.802 Command & Feature Lockdown Capability: Not Supported 00:18:23.802 Abort Command Limit: 4 00:18:23.802 Async Event Request Limit: 4 00:18:23.802 Number of Firmware Slots: N/A 00:18:23.802 Firmware Slot 1 Read-Only: N/A 00:18:23.802 Firmware Activation Without Reset: N/A 00:18:23.802 Multiple Update Detection Support: N/A 00:18:23.802 Firmware Update Granularity: No Information Provided 00:18:23.802 Per-Namespace SMART Log: No 00:18:23.802 Asymmetric Namespace Access Log Page: Not Supported 00:18:23.802 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:23.802 Command Effects Log Page: Supported 00:18:23.802 Get Log Page Extended Data: Supported 00:18:23.802 Telemetry Log Pages: Not Supported 00:18:23.802 Persistent Event Log Pages: Not Supported 00:18:23.802 Supported Log Pages Log Page: May Support 00:18:23.802 Commands Supported & Effects Log Page: Not Supported 00:18:23.802 Feature Identifiers & Effects Log Page:May Support 00:18:23.802 NVMe-MI Commands & Effects Log Page: May Support 00:18:23.802 Data Area 4 for Telemetry Log: Not Supported 00:18:23.802 Error Log Page Entries Supported: 128 00:18:23.802 Keep Alive: Supported 00:18:23.802 Keep Alive Granularity: 10000 ms 00:18:23.802 00:18:23.802 NVM Command Set Attributes 00:18:23.802 ========================== 00:18:23.802 Submission Queue Entry Size 00:18:23.802 Max: 64 00:18:23.802 Min: 64 00:18:23.802 Completion Queue Entry Size 00:18:23.802 Max: 16 00:18:23.802 Min: 16 00:18:23.802 Number of Namespaces: 32 00:18:23.802 Compare Command: Supported 00:18:23.802 Write Uncorrectable Command: Not Supported 00:18:23.802 Dataset Management Command: Supported 00:18:23.802 Write Zeroes Command: Supported 00:18:23.802 Set Features Save Field: Not Supported 00:18:23.802 Reservations: Not Supported 00:18:23.802 Timestamp: Not Supported 00:18:23.802 Copy: Supported 00:18:23.802 Volatile Write Cache: Present 00:18:23.802 Atomic Write Unit (Normal): 1 00:18:23.802 Atomic Write Unit (PFail): 1 00:18:23.802 Atomic Compare & Write Unit: 1 00:18:23.802 Fused Compare & Write: Supported 00:18:23.802 Scatter-Gather List 00:18:23.803 SGL Command Set: Supported (Dword aligned) 00:18:23.803 SGL Keyed: Not Supported 00:18:23.803 SGL Bit Bucket Descriptor: Not Supported 00:18:23.803 SGL Metadata Pointer: Not Supported 00:18:23.803 Oversized SGL: Not Supported 00:18:23.803 SGL Metadata Address: Not Supported 00:18:23.803 SGL Offset: Not Supported 00:18:23.803 Transport SGL Data Block: Not Supported 00:18:23.803 Replay Protected Memory Block: Not Supported 00:18:23.803 00:18:23.803 Firmware Slot Information 00:18:23.803 ========================= 00:18:23.803 Active slot: 1 00:18:23.803 Slot 1 Firmware Revision: 25.01 00:18:23.803 00:18:23.803 00:18:23.803 Commands Supported and Effects 00:18:23.803 ============================== 00:18:23.803 Admin 
Commands 00:18:23.803 -------------- 00:18:23.803 Get Log Page (02h): Supported 00:18:23.803 Identify (06h): Supported 00:18:23.803 Abort (08h): Supported 00:18:23.803 Set Features (09h): Supported 00:18:23.803 Get Features (0Ah): Supported 00:18:23.803 Asynchronous Event Request (0Ch): Supported 00:18:23.803 Keep Alive (18h): Supported 00:18:23.803 I/O Commands 00:18:23.803 ------------ 00:18:23.803 Flush (00h): Supported LBA-Change 00:18:23.803 Write (01h): Supported LBA-Change 00:18:23.803 Read (02h): Supported 00:18:23.803 Compare (05h): Supported 00:18:23.803 Write Zeroes (08h): Supported LBA-Change 00:18:23.803 Dataset Management (09h): Supported LBA-Change 00:18:23.803 Copy (19h): Supported LBA-Change 00:18:23.803 00:18:23.803 Error Log 00:18:23.803 ========= 00:18:23.803 00:18:23.803 Arbitration 00:18:23.803 =========== 00:18:23.803 Arbitration Burst: 1 00:18:23.803 00:18:23.803 Power Management 00:18:23.803 ================ 00:18:23.803 Number of Power States: 1 00:18:23.803 Current Power State: Power State #0 00:18:23.803 Power State #0: 00:18:23.803 Max Power: 0.00 W 00:18:23.803 Non-Operational State: Operational 00:18:23.803 Entry Latency: Not Reported 00:18:23.803 Exit Latency: Not Reported 00:18:23.803 Relative Read Throughput: 0 00:18:23.803 Relative Read Latency: 0 00:18:23.803 Relative Write Throughput: 0 00:18:23.803 Relative Write Latency: 0 00:18:23.803 Idle Power: Not Reported 00:18:23.803 Active Power: Not Reported 00:18:23.803 Non-Operational Permissive Mode: Not Supported 00:18:23.803 00:18:23.803 Health Information 00:18:23.803 ================== 00:18:23.803 Critical Warnings: 00:18:23.803 Available Spare Space: OK 00:18:23.803 Temperature: OK 00:18:23.803 Device Reliability: OK 00:18:23.803 Read Only: No 00:18:23.803 Volatile Memory Backup: OK 00:18:23.803 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:23.803 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:23.803 Available Spare: 0% 00:18:23.803 Available Sp[2024-12-13 12:23:51.278514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:23.803 [2024-12-13 12:23:51.278527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:23.803 [2024-12-13 12:23:51.278551] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:23.803 [2024-12-13 12:23:51.278559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.803 [2024-12-13 12:23:51.278565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.803 [2024-12-13 12:23:51.278570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.803 [2024-12-13 12:23:51.278575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.803 [2024-12-13 12:23:51.278680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:23.803 [2024-12-13 12:23:51.278689] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:23.803 [2024-12-13 12:23:51.279691] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.803 [2024-12-13 12:23:51.279740] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:23.803 [2024-12-13 12:23:51.279746] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:23.803 [2024-12-13 12:23:51.280694] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:23.803 [2024-12-13 12:23:51.280704] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:23.803 [2024-12-13 12:23:51.280753] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:23.803 [2024-12-13 12:23:51.282790] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:23.803 are Threshold: 0% 00:18:23.803 Life Percentage Used: 0% 00:18:23.803 Data Units Read: 0 00:18:23.803 Data Units Written: 0 00:18:23.803 Host Read Commands: 0 00:18:23.803 Host Write Commands: 0 00:18:23.803 Controller Busy Time: 0 minutes 00:18:23.803 Power Cycles: 0 00:18:23.803 Power On Hours: 0 hours 00:18:23.803 Unsafe Shutdowns: 0 00:18:23.803 Unrecoverable Media Errors: 0 00:18:23.803 Lifetime Error Log Entries: 0 00:18:23.803 Warning Temperature Time: 0 minutes 00:18:23.803 Critical Temperature Time: 0 minutes 00:18:23.803 00:18:23.803 Number of Queues 00:18:23.803 ================ 00:18:23.803 Number of I/O Submission Queues: 127 00:18:23.803 Number of I/O Completion Queues: 127 00:18:23.803 00:18:23.803 Active Namespaces 00:18:23.803 ================= 00:18:23.803 Namespace ID:1 00:18:23.803 Error Recovery Timeout: Unlimited 00:18:23.803 Command Set Identifier: NVM (00h) 00:18:23.803 Deallocate: Supported 00:18:23.803 Deallocated/Unwritten Error: Not Supported 00:18:23.803 Deallocated Read Value: Unknown 00:18:23.803 Deallocate in Write Zeroes: Not Supported 00:18:23.803 Deallocated Guard Field: 0xFFFF 00:18:23.803 Flush: Supported 00:18:23.803 Reservation: Supported 00:18:23.803 Namespace Sharing Capabilities: Multiple Controllers 00:18:23.803 Size (in LBAs): 131072 (0GiB) 00:18:23.803 Capacity (in LBAs): 131072 (0GiB) 00:18:23.803 Utilization (in LBAs): 131072 (0GiB) 00:18:23.803 NGUID: 15A65DC82AB041868157FD873C2A1D18 00:18:23.803 UUID: 15a65dc8-2ab0-4186-8157-fd873c2a1d18 00:18:23.803 Thin Provisioning: Not Supported 00:18:23.803 Per-NS Atomic Units: Yes 00:18:23.803 Atomic Boundary Size (Normal): 0 00:18:23.803 Atomic Boundary Size (PFail): 0 00:18:23.803 Atomic Boundary Offset: 0 00:18:23.803 Maximum Single Source Range Length: 65535 00:18:23.803 Maximum Copy Length: 65535 00:18:23.803 Maximum Source Range Count: 1 00:18:23.803 NGUID/EUI64 Never Reused: No 00:18:23.803 Namespace Write Protected: No 00:18:23.803 Number of LBA Formats: 1 00:18:23.803 Current LBA Format: LBA Format #00 00:18:23.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:23.803 00:18:23.803 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
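The step above launches spdk_nvme_perf against the vfio-user controller exported at /var/run/vfio-user/domain/vfio-user1/1. For reference, a minimal standalone sketch of the same read run, assuming SPDK was built under ./spdk and a target is already serving that socket directory (both paths are assumptions; the flags are copied from the invocation above):

# 4 KiB sequential reads, queue depth 128, 5 seconds, pinned to core mask 0x2.
# -r selects the VFIOUSER transport endpoint and the subsystem NQN to connect to.
./spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The IOPS/latency table that follows is this run's output.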
00:18:24.062 [2024-12-13 12:23:51.511746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.338 Initializing NVMe Controllers 00:18:29.338 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:29.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:29.338 Initialization complete. Launching workers. 00:18:29.338 ======================================================== 00:18:29.338 Latency(us) 00:18:29.338 Device Information : IOPS MiB/s Average min max 00:18:29.338 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39938.76 156.01 3205.38 973.04 10369.23 00:18:29.338 ======================================================== 00:18:29.338 Total : 39938.76 156.01 3205.38 973.04 10369.23 00:18:29.338 00:18:29.338 [2024-12-13 12:23:56.530141] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.338 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:29.338 [2024-12-13 12:23:56.769302] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.614 Initializing NVMe Controllers 00:18:34.614 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:34.614 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:34.614 Initialization complete. Launching workers. 
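In the perf tables, the MiB/s column is the IOPS figure scaled by the I/O size: for the 4 KiB write run whose results follow, 16050.89 IO/s * 4096 B / 1048576 B/MiB ≈ 62.70 MiB/s, matching the printed value. A one-line check with any POSIX awk:

# IOPS * io_size_bytes / bytes_per_MiB for the write run below.
awk 'BEGIN { printf "%.2f MiB/s\n", 16050.89 * 4096 / 1048576 }'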
00:18:34.614 ======================================================== 00:18:34.614 Latency(us) 00:18:34.614 Device Information : IOPS MiB/s Average min max 00:18:34.614 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.89 62.70 7980.02 7778.99 8032.15 00:18:34.614 ======================================================== 00:18:34.614 Total : 16050.89 62.70 7980.02 7778.99 8032.15 00:18:34.614 00:18:34.614 [2024-12-13 12:24:01.811803] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.614 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:34.614 [2024-12-13 12:24:02.026801] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.890 [2024-12-13 12:24:07.133207] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.890 Initializing NVMe Controllers 00:18:39.890 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:39.890 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:39.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:39.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:39.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:39.891 Initialization complete. Launching workers. 00:18:39.891 Starting thread on core 2 00:18:39.891 Starting thread on core 3 00:18:39.891 Starting thread on core 1 00:18:39.891 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:39.891 [2024-12-13 12:24:07.432232] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:43.180 [2024-12-13 12:24:10.494787] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.180 Initializing NVMe Controllers 00:18:43.180 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.180 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:43.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:43.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:43.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:43.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:43.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:43.180 Initialization complete. Launching workers. 
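In the arbitration results that follow, the secs/100000 ios column is simply 100000 divided by the per-core IO/s figure, e.g. for core 0:

# 100000 IOs at 8202.00 IO/s -> 12.19 seconds, as printed in the table below.
awk 'BEGIN { printf "%.2f\n", 100000 / 8202.00 }'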
00:18:43.180 Starting thread on core 1 with urgent priority queue 00:18:43.180 Starting thread on core 2 with urgent priority queue 00:18:43.180 Starting thread on core 3 with urgent priority queue 00:18:43.180 Starting thread on core 0 with urgent priority queue 00:18:43.180 SPDK bdev Controller (SPDK1 ) core 0: 8202.00 IO/s 12.19 secs/100000 ios 00:18:43.180 SPDK bdev Controller (SPDK1 ) core 1: 8629.00 IO/s 11.59 secs/100000 ios 00:18:43.180 SPDK bdev Controller (SPDK1 ) core 2: 8479.67 IO/s 11.79 secs/100000 ios 00:18:43.180 SPDK bdev Controller (SPDK1 ) core 3: 9814.33 IO/s 10.19 secs/100000 ios 00:18:43.180 ======================================================== 00:18:43.180 00:18:43.180 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:43.180 [2024-12-13 12:24:10.775243] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:43.180 Initializing NVMe Controllers 00:18:43.180 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.180 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:43.180 Namespace ID: 1 size: 0GB 00:18:43.180 Initialization complete. 00:18:43.180 INFO: using host memory buffer for IO 00:18:43.180 Hello world! 00:18:43.180 [2024-12-13 12:24:10.806448] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.181 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:43.440 [2024-12-13 12:24:11.077940] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:44.819 Initializing NVMe Controllers 00:18:44.819 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:44.819 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:44.819 Initialization complete. Launching workers. 
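The overhead tool output below reports cumulative latency histograms for the submit and complete paths; ranges are in microseconds and the parenthesized values are per-bucket counts. The summary line gives avg/min/max in nanoseconds, so dividing by 1000 lines it up with the histogram ranges. A throwaway extraction sketch, assuming the plain tool output (without the harness timestamps) was captured to overhead.log (the file name is an assumption):

# Pull the submit/complete summary lines and restate the figures in microseconds.
grep -E '^(submit|complete) \(in ns\)' overhead.log |
  awk '{ printf "%s: avg=%.1f us min=%.1f us max=%.1f us\n", $1, $8/1000, $9/1000, $10/1000 }'

For the submit path this gives avg ≈ 7.1 us with a max ≈ 4002 us, which matches the 15 outliers landing in the 3994.575 - 4025.783 us bucket at the tail of the histogram.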
00:18:44.819 submit (in ns) avg, min, max = 7071.1, 3153.3, 4002095.2 00:18:44.819 complete (in ns) avg, min, max = 20858.1, 1747.6, 3998980.0 00:18:44.819 00:18:44.819 Submit histogram 00:18:44.819 ================ 00:18:44.819 Range in us Cumulative Count 00:18:44.819 3.139 - 3.154: 0.0061% ( 1) 00:18:44.819 3.154 - 3.170: 0.0429% ( 6) 00:18:44.819 3.170 - 3.185: 0.0981% ( 9) 00:18:44.819 3.185 - 3.200: 0.1287% ( 5) 00:18:44.819 3.200 - 3.215: 0.3985% ( 44) 00:18:44.819 3.215 - 3.230: 1.9556% ( 254) 00:18:44.819 3.230 - 3.246: 6.5044% ( 742) 00:18:44.819 3.246 - 3.261: 12.1260% ( 917) 00:18:44.819 3.261 - 3.276: 18.5201% ( 1043) 00:18:44.819 3.276 - 3.291: 26.0299% ( 1225) 00:18:44.819 3.291 - 3.307: 33.1229% ( 1157) 00:18:44.819 3.307 - 3.322: 38.4134% ( 863) 00:18:44.819 3.322 - 3.337: 43.4465% ( 821) 00:18:44.819 3.337 - 3.352: 48.5471% ( 832) 00:18:44.819 3.352 - 3.368: 52.3418% ( 619) 00:18:44.819 3.368 - 3.383: 56.1795% ( 626) 00:18:44.819 3.383 - 3.398: 63.3767% ( 1174) 00:18:44.819 3.398 - 3.413: 68.1094% ( 772) 00:18:44.819 3.413 - 3.429: 74.0130% ( 963) 00:18:44.819 3.429 - 3.444: 79.4568% ( 888) 00:18:44.819 3.444 - 3.459: 83.0309% ( 583) 00:18:44.819 3.459 - 3.474: 85.1030% ( 338) 00:18:44.819 3.474 - 3.490: 86.2433% ( 186) 00:18:44.819 3.490 - 3.505: 86.7950% ( 90) 00:18:44.819 3.505 - 3.520: 87.3590% ( 92) 00:18:44.819 3.520 - 3.535: 87.9475% ( 96) 00:18:44.819 3.535 - 3.550: 88.6709% ( 118) 00:18:44.819 3.550 - 3.566: 89.5169% ( 138) 00:18:44.819 3.566 - 3.581: 90.5530% ( 169) 00:18:44.819 3.581 - 3.596: 91.3622% ( 132) 00:18:44.819 3.596 - 3.611: 92.2572% ( 146) 00:18:44.819 3.611 - 3.627: 93.2197% ( 157) 00:18:44.819 3.627 - 3.642: 94.2742% ( 172) 00:18:44.819 3.642 - 3.657: 95.2979% ( 167) 00:18:44.819 3.657 - 3.672: 96.1256% ( 135) 00:18:44.819 3.672 - 3.688: 96.8980% ( 126) 00:18:44.819 3.688 - 3.703: 97.5601% ( 108) 00:18:44.819 3.703 - 3.718: 97.9708% ( 67) 00:18:44.819 3.718 - 3.733: 98.4490% ( 78) 00:18:44.819 3.733 - 3.749: 98.7433% ( 48) 00:18:44.819 3.749 - 3.764: 99.0069% ( 43) 00:18:44.819 3.764 - 3.779: 99.1663% ( 26) 00:18:44.819 3.779 - 3.794: 99.2643% ( 16) 00:18:44.819 3.794 - 3.810: 99.3747% ( 18) 00:18:44.819 3.810 - 3.825: 99.4483% ( 12) 00:18:44.819 3.825 - 3.840: 99.4728% ( 4) 00:18:44.819 3.886 - 3.901: 99.4789% ( 1) 00:18:44.819 3.931 - 3.962: 99.4850% ( 1) 00:18:44.819 5.029 - 5.059: 99.4912% ( 1) 00:18:44.819 5.211 - 5.242: 99.5034% ( 2) 00:18:44.819 5.242 - 5.272: 99.5157% ( 2) 00:18:44.819 5.333 - 5.364: 99.5218% ( 1) 00:18:44.819 5.394 - 5.425: 99.5280% ( 1) 00:18:44.819 5.425 - 5.455: 99.5341% ( 1) 00:18:44.819 5.455 - 5.486: 99.5402% ( 1) 00:18:44.819 5.516 - 5.547: 99.5463% ( 1) 00:18:44.819 5.608 - 5.638: 99.5586% ( 2) 00:18:44.819 5.638 - 5.669: 99.5709% ( 2) 00:18:44.819 5.851 - 5.882: 99.5770% ( 1) 00:18:44.819 5.912 - 5.943: 99.5893% ( 2) 00:18:44.819 5.943 - 5.973: 99.5954% ( 1) 00:18:44.819 6.034 - 6.065: 99.6015% ( 1) 00:18:44.819 6.187 - 6.217: 99.6077% ( 1) 00:18:44.819 6.370 - 6.400: 99.6138% ( 1) 00:18:44.819 6.400 - 6.430: 99.6199% ( 1) 00:18:44.819 6.796 - 6.827: 99.6260% ( 1) 00:18:44.819 6.827 - 6.857: 99.6322% ( 1) 00:18:44.819 6.857 - 6.888: 99.6444% ( 2) 00:18:44.819 6.888 - 6.918: 99.6506% ( 1) 00:18:44.819 6.918 - 6.949: 99.6567% ( 1) 00:18:44.819 6.949 - 6.979: 99.6628% ( 1) 00:18:44.819 6.979 - 7.010: 99.6690% ( 1) 00:18:44.819 7.010 - 7.040: 99.6751% ( 1) 00:18:44.819 7.070 - 7.101: 99.6873% ( 2) 00:18:44.819 7.131 - 7.162: 99.6996% ( 2) 00:18:44.819 7.253 - 7.284: 99.7057% ( 1) 00:18:44.819 7.436 - 7.467: 99.7180% 
( 2) 00:18:44.819 7.467 - 7.497: 99.7241% ( 1) 00:18:44.819 7.497 - 7.528: 99.7303% ( 1) 00:18:44.819 [2024-12-13 12:24:12.101914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:44.819 7.589 - 7.619: 99.7425% ( 2) 00:18:44.819 7.650 - 7.680: 99.7487% ( 1) 00:18:44.819 7.680 - 7.710: 99.7548% ( 1) 00:18:44.819 7.771 - 7.802: 99.7670% ( 2) 00:18:44.819 7.802 - 7.863: 99.7793% ( 2) 00:18:44.819 7.985 - 8.046: 99.7977% ( 3) 00:18:44.819 8.046 - 8.107: 99.8100% ( 2) 00:18:44.819 8.107 - 8.168: 99.8161% ( 1) 00:18:44.819 8.290 - 8.350: 99.8283% ( 2) 00:18:44.819 8.350 - 8.411: 99.8345% ( 1) 00:18:44.819 8.472 - 8.533: 99.8406% ( 1) 00:18:44.820 8.533 - 8.594: 99.8467% ( 1) 00:18:44.820 8.655 - 8.716: 99.8529% ( 1) 00:18:44.820 8.716 - 8.777: 99.8651% ( 2) 00:18:44.820 8.777 - 8.838: 99.8713% ( 1) 00:18:44.820 8.899 - 8.960: 99.8774% ( 1) 00:18:44.820 9.387 - 9.448: 99.8835% ( 1) 00:18:44.820 10.057 - 10.118: 99.8897% ( 1) 00:18:44.820 10.971 - 11.032: 99.8958% ( 1) 00:18:44.820 13.775 - 13.836: 99.9019% ( 1) 00:18:44.820 40.716 - 40.960: 99.9080% ( 1) 00:18:44.820 3994.575 - 4025.783: 100.0000% ( 15) 00:18:44.820 00:18:44.820 Complete histogram 00:18:44.820 ================== 00:18:44.820 Range in us Cumulative Count 00:18:44.820 1.745 - 1.752: 0.0245% ( 4) 00:18:44.820 1.752 - 1.760: 0.0368% ( 2) 00:18:44.820 1.768 - 1.775: 0.1042% ( 11) 00:18:44.820 1.775 - 1.783: 0.4169% ( 51) 00:18:44.820 1.783 - 1.790: 1.2935% ( 143) 00:18:44.820 1.790 - 1.798: 2.7648% ( 240) 00:18:44.820 1.798 - 1.806: 4.1258% ( 222) 00:18:44.820 1.806 - 1.813: 5.2783% ( 188) 00:18:44.820 1.813 - 1.821: 6.9152% ( 267) 00:18:44.820 1.821 - 1.829: 13.3583% ( 1051) 00:18:44.820 1.829 - 1.836: 31.7680% ( 3003) 00:18:44.820 1.836 - 1.844: 57.5405% ( 4204) 00:18:44.820 1.844 - 1.851: 75.9564% ( 3004) 00:18:44.820 1.851 - 1.859: 85.4156% ( 1543) 00:18:44.820 1.859 - 1.867: 90.2771% ( 793) 00:18:44.820 1.867 - 1.874: 93.8021% ( 575) 00:18:44.820 1.874 - 1.882: 95.8926% ( 341) 00:18:44.820 1.882 - 1.890: 96.7325% ( 137) 00:18:44.820 1.890 - 1.897: 97.1739% ( 72) 00:18:44.820 1.897 - 1.905: 97.7011% ( 86) 00:18:44.820 1.905 - 1.912: 98.1179% ( 68) 00:18:44.820 1.912 - 1.920: 98.5287% ( 67) 00:18:44.820 1.920 - 1.928: 98.8659% ( 55) 00:18:44.820 1.928 - 1.935: 99.0498% ( 30) 00:18:44.820 1.935 - 1.943: 99.1846% ( 22) 00:18:44.820 1.943 - 1.950: 99.2337% ( 8) 00:18:44.820 1.950 - 1.966: 99.2766% ( 7) 00:18:44.820 1.966 - 1.981: 99.2889% ( 2) 00:18:44.820 1.981 - 1.996: 99.3011% ( 2) 00:18:44.820 1.996 - 2.011: 99.3195% ( 3) 00:18:44.820 2.011 - 2.027: 99.3256% ( 1) 00:18:44.820 2.027 - 2.042: 99.3379% ( 2) 00:18:44.820 2.088 - 2.103: 99.3440% ( 1) 00:18:44.820 2.210 - 2.225: 99.3563% ( 2) 00:18:44.820 2.225 - 2.240: 99.3624% ( 1) 00:18:44.820 2.286 - 2.301: 99.3686% ( 1) 00:18:44.820 2.301 - 2.316: 99.3747% ( 1) 00:18:44.820 3.459 - 3.474: 99.3808% ( 1) 00:18:44.820 3.810 - 3.825: 99.3870% ( 1) 00:18:44.820 3.901 - 3.931: 99.3992% ( 2) 00:18:44.820 3.931 - 3.962: 99.4053% ( 1) 00:18:44.820 4.602 - 4.632: 99.4115% ( 1) 00:18:44.820 4.998 - 5.029: 99.4237% ( 2) 00:18:44.820 5.120 - 5.150: 99.4299% ( 1) 00:18:44.820 5.211 - 5.242: 99.4360% ( 1) 00:18:44.820 5.516 - 5.547: 99.4421% ( 1) 00:18:44.820 5.638 - 5.669: 99.4483% ( 1) 00:18:44.820 6.004 - 6.034: 99.4544% ( 1) 00:18:44.820 6.095 - 6.126: 99.4605% ( 1) 00:18:44.820 6.187 - 6.217: 99.4728% ( 2) 00:18:44.820 6.491 - 6.522: 99.4789% ( 1) 00:18:44.820 6.888 - 6.918: 99.4850% ( 1) 00:18:44.820 7.680 - 7.710: 99.4912% ( 1) 
00:18:44.820 7.802 - 7.863: 99.4973% ( 1) 00:18:44.820 8.107 - 8.168: 99.5034% ( 1) 00:18:44.820 9.082 - 9.143: 99.5096% ( 1) 00:18:44.820 11.825 - 11.886: 99.5157% ( 1) 00:18:44.820 17.432 - 17.554: 99.5218% ( 1) 00:18:44.820 2465.402 - 2481.006: 99.5280% ( 1) 00:18:44.820 3994.575 - 4025.783: 100.0000% ( 77) 00:18:44.820 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:44.820 [ 00:18:44.820 { 00:18:44.820 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:44.820 "subtype": "Discovery", 00:18:44.820 "listen_addresses": [], 00:18:44.820 "allow_any_host": true, 00:18:44.820 "hosts": [] 00:18:44.820 }, 00:18:44.820 { 00:18:44.820 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:44.820 "subtype": "NVMe", 00:18:44.820 "listen_addresses": [ 00:18:44.820 { 00:18:44.820 "trtype": "VFIOUSER", 00:18:44.820 "adrfam": "IPv4", 00:18:44.820 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:44.820 "trsvcid": "0" 00:18:44.820 } 00:18:44.820 ], 00:18:44.820 "allow_any_host": true, 00:18:44.820 "hosts": [], 00:18:44.820 "serial_number": "SPDK1", 00:18:44.820 "model_number": "SPDK bdev Controller", 00:18:44.820 "max_namespaces": 32, 00:18:44.820 "min_cntlid": 1, 00:18:44.820 "max_cntlid": 65519, 00:18:44.820 "namespaces": [ 00:18:44.820 { 00:18:44.820 "nsid": 1, 00:18:44.820 "bdev_name": "Malloc1", 00:18:44.820 "name": "Malloc1", 00:18:44.820 "nguid": "15A65DC82AB041868157FD873C2A1D18", 00:18:44.820 "uuid": "15a65dc8-2ab0-4186-8157-fd873c2a1d18" 00:18:44.820 } 00:18:44.820 ] 00:18:44.820 }, 00:18:44.820 { 00:18:44.820 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:44.820 "subtype": "NVMe", 00:18:44.820 "listen_addresses": [ 00:18:44.820 { 00:18:44.820 "trtype": "VFIOUSER", 00:18:44.820 "adrfam": "IPv4", 00:18:44.820 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:44.820 "trsvcid": "0" 00:18:44.820 } 00:18:44.820 ], 00:18:44.820 "allow_any_host": true, 00:18:44.820 "hosts": [], 00:18:44.820 "serial_number": "SPDK2", 00:18:44.820 "model_number": "SPDK bdev Controller", 00:18:44.820 "max_namespaces": 32, 00:18:44.820 "min_cntlid": 1, 00:18:44.820 "max_cntlid": 65519, 00:18:44.820 "namespaces": [ 00:18:44.820 { 00:18:44.820 "nsid": 1, 00:18:44.820 "bdev_name": "Malloc2", 00:18:44.820 "name": "Malloc2", 00:18:44.820 "nguid": "B1E8E17330164A358F32766BEADA18EE", 00:18:44.820 "uuid": "b1e8e173-3016-4a35-8f32-766beada18ee" 00:18:44.820 } 00:18:44.820 ] 00:18:44.820 } 00:18:44.820 ] 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=300482 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:44.820 12:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:44.820 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:44.820 [2024-12-13 12:24:12.493213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:45.080 Malloc3 00:18:45.080 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:45.339 [2024-12-13 12:24:12.960646] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.339 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:45.339 Asynchronous Event Request test 00:18:45.339 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.339 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.339 Registering asynchronous event callbacks... 00:18:45.339 Starting namespace attribute notice tests for all controllers... 00:18:45.339 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:45.339 aer_cb - Changed Namespace 00:18:45.339 Cleaning up... 
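The namespace-attribute-change AEN above is provoked by hot-adding a second namespace while the aer tool is attached; the RPC sequence is the same one the harness just ran, shown here as a standalone sketch (the rpc.py path is an assumption, the commands and arguments are copied from the log):

RPC=./spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc3                        # 64 MB malloc bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # attach as nsid 2
$RPC nvmf_get_subsystems                                             # dump the resulting config

The subsystem listing that follows is the nvmf_get_subsystems output, showing Malloc3 attached as nsid 2 of nqn.2019-07.io.spdk:cnode1 alongside the original Malloc1.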
00:18:45.598 [ 00:18:45.598 { 00:18:45.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:45.598 "subtype": "Discovery", 00:18:45.598 "listen_addresses": [], 00:18:45.598 "allow_any_host": true, 00:18:45.598 "hosts": [] 00:18:45.598 }, 00:18:45.598 { 00:18:45.598 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:45.598 "subtype": "NVMe", 00:18:45.598 "listen_addresses": [ 00:18:45.598 { 00:18:45.599 "trtype": "VFIOUSER", 00:18:45.599 "adrfam": "IPv4", 00:18:45.599 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:45.599 "trsvcid": "0" 00:18:45.599 } 00:18:45.599 ], 00:18:45.599 "allow_any_host": true, 00:18:45.599 "hosts": [], 00:18:45.599 "serial_number": "SPDK1", 00:18:45.599 "model_number": "SPDK bdev Controller", 00:18:45.599 "max_namespaces": 32, 00:18:45.599 "min_cntlid": 1, 00:18:45.599 "max_cntlid": 65519, 00:18:45.599 "namespaces": [ 00:18:45.599 { 00:18:45.599 "nsid": 1, 00:18:45.599 "bdev_name": "Malloc1", 00:18:45.599 "name": "Malloc1", 00:18:45.599 "nguid": "15A65DC82AB041868157FD873C2A1D18", 00:18:45.599 "uuid": "15a65dc8-2ab0-4186-8157-fd873c2a1d18" 00:18:45.599 }, 00:18:45.599 { 00:18:45.599 "nsid": 2, 00:18:45.599 "bdev_name": "Malloc3", 00:18:45.599 "name": "Malloc3", 00:18:45.599 "nguid": "4BC7763DE8CA4E1D9AB6E58480BCC9B6", 00:18:45.599 "uuid": "4bc7763d-e8ca-4e1d-9ab6-e58480bcc9b6" 00:18:45.599 } 00:18:45.599 ] 00:18:45.599 }, 00:18:45.599 { 00:18:45.599 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:45.599 "subtype": "NVMe", 00:18:45.599 "listen_addresses": [ 00:18:45.599 { 00:18:45.599 "trtype": "VFIOUSER", 00:18:45.599 "adrfam": "IPv4", 00:18:45.599 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:45.599 "trsvcid": "0" 00:18:45.599 } 00:18:45.599 ], 00:18:45.599 "allow_any_host": true, 00:18:45.599 "hosts": [], 00:18:45.599 "serial_number": "SPDK2", 00:18:45.599 "model_number": "SPDK bdev Controller", 00:18:45.599 "max_namespaces": 32, 00:18:45.599 "min_cntlid": 1, 00:18:45.599 "max_cntlid": 65519, 00:18:45.599 "namespaces": [ 00:18:45.599 { 00:18:45.599 "nsid": 1, 00:18:45.599 "bdev_name": "Malloc2", 00:18:45.599 "name": "Malloc2", 00:18:45.599 "nguid": "B1E8E17330164A358F32766BEADA18EE", 00:18:45.599 "uuid": "b1e8e173-3016-4a35-8f32-766beada18ee" 00:18:45.599 } 00:18:45.599 ] 00:18:45.599 } 00:18:45.599 ] 00:18:45.599 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 300482 00:18:45.599 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:45.599 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:45.599 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:45.599 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:45.599 [2024-12-13 12:24:13.224293] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:45.599 [2024-12-13 12:24:13.224341] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300705 ] 00:18:45.599 [2024-12-13 12:24:13.263952] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:45.599 [2024-12-13 12:24:13.269209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:45.599 [2024-12-13 12:24:13.269227] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffa62b28000 00:18:45.599 [2024-12-13 12:24:13.270209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.271219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.272232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.273236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.274247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.275256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.276268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.277270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:45.599 [2024-12-13 12:24:13.278281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:45.599 [2024-12-13 12:24:13.278291] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffa61831000 00:18:45.599 [2024-12-13 12:24:13.279194] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:45.599 [2024-12-13 12:24:13.288489] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:45.599 [2024-12-13 12:24:13.288513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:45.599 [2024-12-13 12:24:13.293601] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:45.599 [2024-12-13 12:24:13.293638] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:45.599 [2024-12-13 12:24:13.293713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:45.599 
[2024-12-13 12:24:13.293726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:45.599 [2024-12-13 12:24:13.293731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:45.599 [2024-12-13 12:24:13.294605] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:45.599 [2024-12-13 12:24:13.294615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:45.599 [2024-12-13 12:24:13.294622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:45.599 [2024-12-13 12:24:13.295611] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:45.599 [2024-12-13 12:24:13.295619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:45.599 [2024-12-13 12:24:13.295627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:45.599 [2024-12-13 12:24:13.296619] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:45.599 [2024-12-13 12:24:13.296628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:45.599 [2024-12-13 12:24:13.297630] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:45.599 [2024-12-13 12:24:13.297639] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:45.599 [2024-12-13 12:24:13.297644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:45.599 [2024-12-13 12:24:13.297650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:45.599 [2024-12-13 12:24:13.297757] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:45.599 [2024-12-13 12:24:13.297761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:45.599 [2024-12-13 12:24:13.297766] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:45.860 [2024-12-13 12:24:13.298636] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:45.860 [2024-12-13 12:24:13.299642] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:45.860 [2024-12-13 12:24:13.300657] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:45.860 [2024-12-13 12:24:13.301656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.860 [2024-12-13 12:24:13.301695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:45.860 [2024-12-13 12:24:13.302668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:45.860 [2024-12-13 12:24:13.302677] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:45.860 [2024-12-13 12:24:13.302682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:45.860 [2024-12-13 12:24:13.302698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:45.860 [2024-12-13 12:24:13.302705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:45.860 [2024-12-13 12:24:13.302714] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:45.860 [2024-12-13 12:24:13.302719] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:45.860 [2024-12-13 12:24:13.302722] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.860 [2024-12-13 12:24:13.302733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:45.860 [2024-12-13 12:24:13.310788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:45.860 [2024-12-13 12:24:13.310799] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:45.860 [2024-12-13 12:24:13.310803] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:45.860 [2024-12-13 12:24:13.310807] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:45.860 [2024-12-13 12:24:13.310814] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:45.860 [2024-12-13 12:24:13.310819] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:45.860 [2024-12-13 12:24:13.310823] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:45.860 [2024-12-13 12:24:13.310827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:45.860 [2024-12-13 12:24:13.310836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:45.860 [2024-12-13 
12:24:13.310848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:45.860 [2024-12-13 12:24:13.318786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:45.860 [2024-12-13 12:24:13.318807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.860 [2024-12-13 12:24:13.318815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.860 [2024-12-13 12:24:13.318822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.860 [2024-12-13 12:24:13.318829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.860 [2024-12-13 12:24:13.318833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:45.860 [2024-12-13 12:24:13.318845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.318853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.326784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.326792] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:45.861 [2024-12-13 12:24:13.326796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.326803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.326808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.326816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.334787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.334839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.334849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.334856] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:45.861 [2024-12-13 12:24:13.334863] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:45.861 [2024-12-13 12:24:13.334866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.334872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.342785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.342795] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:45.861 [2024-12-13 12:24:13.342803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.342809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.342816] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:45.861 [2024-12-13 12:24:13.342820] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:45.861 [2024-12-13 12:24:13.342823] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.342828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.350786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.350798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.350805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.350811] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:45.861 [2024-12-13 12:24:13.350816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:45.861 [2024-12-13 12:24:13.350819] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.350824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.358786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.358795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358827] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:45.861 [2024-12-13 12:24:13.358833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:45.861 [2024-12-13 12:24:13.358838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:45.861 [2024-12-13 12:24:13.358853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.364814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.364828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.374787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.374799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.382786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.382797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.390784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.390798] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:45.861 [2024-12-13 12:24:13.390803] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:45.861 [2024-12-13 12:24:13.390806] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:45.861 [2024-12-13 12:24:13.390809] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:45.861 [2024-12-13 12:24:13.390812] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:45.861 [2024-12-13 12:24:13.390818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:45.861 [2024-12-13 12:24:13.390824] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:45.861 
[2024-12-13 12:24:13.390828] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:45.861 [2024-12-13 12:24:13.390831] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.390837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.390842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:45.861 [2024-12-13 12:24:13.390846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:45.861 [2024-12-13 12:24:13.390849] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.390854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.390861] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:45.861 [2024-12-13 12:24:13.390865] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:45.861 [2024-12-13 12:24:13.390868] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:45.861 [2024-12-13 12:24:13.390873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:45.861 [2024-12-13 12:24:13.398785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.398798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.398807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:45.861 [2024-12-13 12:24:13.398813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:45.861 ===================================================== 00:18:45.861 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:45.861 ===================================================== 00:18:45.861 Controller Capabilities/Features 00:18:45.861 ================================ 00:18:45.861 Vendor ID: 4e58 00:18:45.861 Subsystem Vendor ID: 4e58 00:18:45.861 Serial Number: SPDK2 00:18:45.861 Model Number: SPDK bdev Controller 00:18:45.861 Firmware Version: 25.01 00:18:45.861 Recommended Arb Burst: 6 00:18:45.861 IEEE OUI Identifier: 8d 6b 50 00:18:45.861 Multi-path I/O 00:18:45.861 May have multiple subsystem ports: Yes 00:18:45.861 May have multiple controllers: Yes 00:18:45.861 Associated with SR-IOV VF: No 00:18:45.861 Max Data Transfer Size: 131072 00:18:45.861 Max Number of Namespaces: 32 00:18:45.861 Max Number of I/O Queues: 127 00:18:45.861 NVMe Specification Version (VS): 1.3 00:18:45.861 NVMe Specification Version (Identify): 1.3 00:18:45.861 Maximum Queue Entries: 256 00:18:45.861 Contiguous Queues Required: Yes 00:18:45.861 Arbitration Mechanisms Supported 00:18:45.861 Weighted Round Robin: Not Supported 00:18:45.861 Vendor Specific: Not 
Supported 00:18:45.862 Reset Timeout: 15000 ms 00:18:45.862 Doorbell Stride: 4 bytes 00:18:45.862 NVM Subsystem Reset: Not Supported 00:18:45.862 Command Sets Supported 00:18:45.862 NVM Command Set: Supported 00:18:45.862 Boot Partition: Not Supported 00:18:45.862 Memory Page Size Minimum: 4096 bytes 00:18:45.862 Memory Page Size Maximum: 4096 bytes 00:18:45.862 Persistent Memory Region: Not Supported 00:18:45.862 Optional Asynchronous Events Supported 00:18:45.862 Namespace Attribute Notices: Supported 00:18:45.862 Firmware Activation Notices: Not Supported 00:18:45.862 ANA Change Notices: Not Supported 00:18:45.862 PLE Aggregate Log Change Notices: Not Supported 00:18:45.862 LBA Status Info Alert Notices: Not Supported 00:18:45.862 EGE Aggregate Log Change Notices: Not Supported 00:18:45.862 Normal NVM Subsystem Shutdown event: Not Supported 00:18:45.862 Zone Descriptor Change Notices: Not Supported 00:18:45.862 Discovery Log Change Notices: Not Supported 00:18:45.862 Controller Attributes 00:18:45.862 128-bit Host Identifier: Supported 00:18:45.862 Non-Operational Permissive Mode: Not Supported 00:18:45.862 NVM Sets: Not Supported 00:18:45.862 Read Recovery Levels: Not Supported 00:18:45.862 Endurance Groups: Not Supported 00:18:45.862 Predictable Latency Mode: Not Supported 00:18:45.862 Traffic Based Keep ALive: Not Supported 00:18:45.862 Namespace Granularity: Not Supported 00:18:45.862 SQ Associations: Not Supported 00:18:45.862 UUID List: Not Supported 00:18:45.862 Multi-Domain Subsystem: Not Supported 00:18:45.862 Fixed Capacity Management: Not Supported 00:18:45.862 Variable Capacity Management: Not Supported 00:18:45.862 Delete Endurance Group: Not Supported 00:18:45.862 Delete NVM Set: Not Supported 00:18:45.862 Extended LBA Formats Supported: Not Supported 00:18:45.862 Flexible Data Placement Supported: Not Supported 00:18:45.862 00:18:45.862 Controller Memory Buffer Support 00:18:45.862 ================================ 00:18:45.862 Supported: No 00:18:45.862 00:18:45.862 Persistent Memory Region Support 00:18:45.862 ================================ 00:18:45.862 Supported: No 00:18:45.862 00:18:45.862 Admin Command Set Attributes 00:18:45.862 ============================ 00:18:45.862 Security Send/Receive: Not Supported 00:18:45.862 Format NVM: Not Supported 00:18:45.862 Firmware Activate/Download: Not Supported 00:18:45.862 Namespace Management: Not Supported 00:18:45.862 Device Self-Test: Not Supported 00:18:45.862 Directives: Not Supported 00:18:45.862 NVMe-MI: Not Supported 00:18:45.862 Virtualization Management: Not Supported 00:18:45.862 Doorbell Buffer Config: Not Supported 00:18:45.862 Get LBA Status Capability: Not Supported 00:18:45.862 Command & Feature Lockdown Capability: Not Supported 00:18:45.862 Abort Command Limit: 4 00:18:45.862 Async Event Request Limit: 4 00:18:45.862 Number of Firmware Slots: N/A 00:18:45.862 Firmware Slot 1 Read-Only: N/A 00:18:45.862 Firmware Activation Without Reset: N/A 00:18:45.862 Multiple Update Detection Support: N/A 00:18:45.862 Firmware Update Granularity: No Information Provided 00:18:45.862 Per-Namespace SMART Log: No 00:18:45.862 Asymmetric Namespace Access Log Page: Not Supported 00:18:45.862 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:45.862 Command Effects Log Page: Supported 00:18:45.862 Get Log Page Extended Data: Supported 00:18:45.862 Telemetry Log Pages: Not Supported 00:18:45.862 Persistent Event Log Pages: Not Supported 00:18:45.862 Supported Log Pages Log Page: May Support 00:18:45.862 Commands Supported & 
Effects Log Page: Not Supported 00:18:45.862 Feature Identifiers & Effects Log Page:May Support 00:18:45.862 NVMe-MI Commands & Effects Log Page: May Support 00:18:45.862 Data Area 4 for Telemetry Log: Not Supported 00:18:45.862 Error Log Page Entries Supported: 128 00:18:45.862 Keep Alive: Supported 00:18:45.862 Keep Alive Granularity: 10000 ms 00:18:45.862 00:18:45.862 NVM Command Set Attributes 00:18:45.862 ========================== 00:18:45.862 Submission Queue Entry Size 00:18:45.862 Max: 64 00:18:45.862 Min: 64 00:18:45.862 Completion Queue Entry Size 00:18:45.862 Max: 16 00:18:45.862 Min: 16 00:18:45.862 Number of Namespaces: 32 00:18:45.862 Compare Command: Supported 00:18:45.862 Write Uncorrectable Command: Not Supported 00:18:45.862 Dataset Management Command: Supported 00:18:45.862 Write Zeroes Command: Supported 00:18:45.862 Set Features Save Field: Not Supported 00:18:45.862 Reservations: Not Supported 00:18:45.862 Timestamp: Not Supported 00:18:45.862 Copy: Supported 00:18:45.862 Volatile Write Cache: Present 00:18:45.862 Atomic Write Unit (Normal): 1 00:18:45.862 Atomic Write Unit (PFail): 1 00:18:45.862 Atomic Compare & Write Unit: 1 00:18:45.862 Fused Compare & Write: Supported 00:18:45.862 Scatter-Gather List 00:18:45.862 SGL Command Set: Supported (Dword aligned) 00:18:45.862 SGL Keyed: Not Supported 00:18:45.862 SGL Bit Bucket Descriptor: Not Supported 00:18:45.862 SGL Metadata Pointer: Not Supported 00:18:45.862 Oversized SGL: Not Supported 00:18:45.862 SGL Metadata Address: Not Supported 00:18:45.862 SGL Offset: Not Supported 00:18:45.862 Transport SGL Data Block: Not Supported 00:18:45.862 Replay Protected Memory Block: Not Supported 00:18:45.862 00:18:45.862 Firmware Slot Information 00:18:45.862 ========================= 00:18:45.862 Active slot: 1 00:18:45.862 Slot 1 Firmware Revision: 25.01 00:18:45.862 00:18:45.862 00:18:45.862 Commands Supported and Effects 00:18:45.862 ============================== 00:18:45.862 Admin Commands 00:18:45.862 -------------- 00:18:45.862 Get Log Page (02h): Supported 00:18:45.862 Identify (06h): Supported 00:18:45.862 Abort (08h): Supported 00:18:45.862 Set Features (09h): Supported 00:18:45.862 Get Features (0Ah): Supported 00:18:45.862 Asynchronous Event Request (0Ch): Supported 00:18:45.862 Keep Alive (18h): Supported 00:18:45.862 I/O Commands 00:18:45.862 ------------ 00:18:45.862 Flush (00h): Supported LBA-Change 00:18:45.862 Write (01h): Supported LBA-Change 00:18:45.862 Read (02h): Supported 00:18:45.862 Compare (05h): Supported 00:18:45.862 Write Zeroes (08h): Supported LBA-Change 00:18:45.862 Dataset Management (09h): Supported LBA-Change 00:18:45.862 Copy (19h): Supported LBA-Change 00:18:45.862 00:18:45.862 Error Log 00:18:45.862 ========= 00:18:45.862 00:18:45.862 Arbitration 00:18:45.862 =========== 00:18:45.862 Arbitration Burst: 1 00:18:45.862 00:18:45.862 Power Management 00:18:45.862 ================ 00:18:45.862 Number of Power States: 1 00:18:45.862 Current Power State: Power State #0 00:18:45.862 Power State #0: 00:18:45.862 Max Power: 0.00 W 00:18:45.862 Non-Operational State: Operational 00:18:45.862 Entry Latency: Not Reported 00:18:45.862 Exit Latency: Not Reported 00:18:45.862 Relative Read Throughput: 0 00:18:45.862 Relative Read Latency: 0 00:18:45.862 Relative Write Throughput: 0 00:18:45.862 Relative Write Latency: 0 00:18:45.862 Idle Power: Not Reported 00:18:45.862 Active Power: Not Reported 00:18:45.862 Non-Operational Permissive Mode: Not Supported 00:18:45.862 00:18:45.862 Health Information 
00:18:45.862 ================== 00:18:45.862 Critical Warnings: 00:18:45.862 Available Spare Space: OK 00:18:45.862 Temperature: OK 00:18:45.862 Device Reliability: OK 00:18:45.862 Read Only: No 00:18:45.862 Volatile Memory Backup: OK 00:18:45.862 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:45.862 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:45.862 Available Spare: 0% 00:18:45.862 [2024-12-13 12:24:13.398900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:45.862 [2024-12-13 12:24:13.406786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:45.862 [2024-12-13 12:24:13.406813] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:45.862 [2024-12-13 12:24:13.406821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.862 [2024-12-13 12:24:13.406827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.862 [2024-12-13 12:24:13.406833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.862 [2024-12-13 12:24:13.406838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.862 [2024-12-13 12:24:13.406886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:45.862 [2024-12-13 12:24:13.406896] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:45.862 [2024-12-13 12:24:13.407893] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.862 [2024-12-13 12:24:13.407935] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:45.862 [2024-12-13 12:24:13.407941] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:45.862 [2024-12-13 12:24:13.408901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:45.863 [2024-12-13 12:24:13.408912] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:45.863 [2024-12-13 12:24:13.408959] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:45.863 [2024-12-13 12:24:13.409916] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:45.863 Available Spare Threshold: 0% 00:18:45.863 Life Percentage Used: 0% 00:18:45.863 Data Units Read: 0 00:18:45.863 Data Units Written: 0 00:18:45.863 Host Read Commands: 0 00:18:45.863 Host Write Commands: 0 00:18:45.863 Controller Busy Time: 0 minutes 00:18:45.863 Power Cycles: 0 00:18:45.863 Power On Hours: 0 hours 00:18:45.863 Unsafe Shutdowns: 0 00:18:45.863 Unrecoverable Media Errors: 0 00:18:45.863 Lifetime Error Log Entries: 0 00:18:45.863 Warning Temperature 
Time: 0 minutes 00:18:45.863 Critical Temperature Time: 0 minutes 00:18:45.863 00:18:45.863 Number of Queues 00:18:45.863 ================ 00:18:45.863 Number of I/O Submission Queues: 127 00:18:45.863 Number of I/O Completion Queues: 127 00:18:45.863 00:18:45.863 Active Namespaces 00:18:45.863 ================= 00:18:45.863 Namespace ID:1 00:18:45.863 Error Recovery Timeout: Unlimited 00:18:45.863 Command Set Identifier: NVM (00h) 00:18:45.863 Deallocate: Supported 00:18:45.863 Deallocated/Unwritten Error: Not Supported 00:18:45.863 Deallocated Read Value: Unknown 00:18:45.863 Deallocate in Write Zeroes: Not Supported 00:18:45.863 Deallocated Guard Field: 0xFFFF 00:18:45.863 Flush: Supported 00:18:45.863 Reservation: Supported 00:18:45.863 Namespace Sharing Capabilities: Multiple Controllers 00:18:45.863 Size (in LBAs): 131072 (0GiB) 00:18:45.863 Capacity (in LBAs): 131072 (0GiB) 00:18:45.863 Utilization (in LBAs): 131072 (0GiB) 00:18:45.863 NGUID: B1E8E17330164A358F32766BEADA18EE 00:18:45.863 UUID: b1e8e173-3016-4a35-8f32-766beada18ee 00:18:45.863 Thin Provisioning: Not Supported 00:18:45.863 Per-NS Atomic Units: Yes 00:18:45.863 Atomic Boundary Size (Normal): 0 00:18:45.863 Atomic Boundary Size (PFail): 0 00:18:45.863 Atomic Boundary Offset: 0 00:18:45.863 Maximum Single Source Range Length: 65535 00:18:45.863 Maximum Copy Length: 65535 00:18:45.863 Maximum Source Range Count: 1 00:18:45.863 NGUID/EUI64 Never Reused: No 00:18:45.863 Namespace Write Protected: No 00:18:45.863 Number of LBA Formats: 1 00:18:45.863 Current LBA Format: LBA Format #00 00:18:45.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:45.863 00:18:45.863 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:46.122 [2024-12-13 12:24:13.634083] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.721 Initializing NVMe Controllers 00:18:51.721 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:51.721 Initialization complete. Launching workers. 
00:18:51.721 ======================================================== 00:18:51.721 Latency(us) 00:18:51.721 Device Information : IOPS MiB/s Average min max 00:18:51.721 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39919.18 155.93 3206.79 970.84 10332.16 00:18:51.721 ======================================================== 00:18:51.721 Total : 39919.18 155.93 3206.79 970.84 10332.16 00:18:51.721 00:18:51.722 [2024-12-13 12:24:18.734050] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.722 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:51.722 [2024-12-13 12:24:18.969725] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.023 Initializing NVMe Controllers 00:18:57.023 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:57.023 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:57.023 Initialization complete. Launching workers. 00:18:57.023 ======================================================== 00:18:57.023 Latency(us) 00:18:57.023 Device Information : IOPS MiB/s Average min max 00:18:57.023 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39927.03 155.96 3205.69 971.84 7057.13 00:18:57.023 ======================================================== 00:18:57.023 Total : 39927.03 155.96 3205.69 971.84 7057.13 00:18:57.023 00:18:57.023 [2024-12-13 12:24:23.992068] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.023 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:57.023 [2024-12-13 12:24:24.195245] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:02.398 [2024-12-13 12:24:29.339882] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:02.398 Initializing NVMe Controllers 00:19:02.398 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.398 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:02.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:02.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:02.398 Initialization complete. Launching workers. 
00:19:02.398 Starting thread on core 2 00:19:02.398 Starting thread on core 3 00:19:02.398 Starting thread on core 1 00:19:02.398 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:02.398 [2024-12-13 12:24:29.628483] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:05.721 [2024-12-13 12:24:32.710750] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:05.721 Initializing NVMe Controllers 00:19:05.721 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.721 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.721 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:05.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:05.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:05.722 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:05.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:05.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:05.722 Initialization complete. Launching workers. 00:19:05.722 Starting thread on core 1 with urgent priority queue 00:19:05.722 Starting thread on core 2 with urgent priority queue 00:19:05.722 Starting thread on core 3 with urgent priority queue 00:19:05.722 Starting thread on core 0 with urgent priority queue 00:19:05.722 SPDK bdev Controller (SPDK2 ) core 0: 8645.33 IO/s 11.57 secs/100000 ios 00:19:05.722 SPDK bdev Controller (SPDK2 ) core 1: 8749.67 IO/s 11.43 secs/100000 ios 00:19:05.722 SPDK bdev Controller (SPDK2 ) core 2: 7884.33 IO/s 12.68 secs/100000 ios 00:19:05.722 SPDK bdev Controller (SPDK2 ) core 3: 8278.33 IO/s 12.08 secs/100000 ios 00:19:05.722 ======================================================== 00:19:05.722 00:19:05.722 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:05.722 [2024-12-13 12:24:32.989195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:05.722 Initializing NVMe Controllers 00:19:05.722 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.722 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:05.722 Namespace ID: 1 size: 0GB 00:19:05.722 Initialization complete. 00:19:05.722 INFO: using host memory buffer for IO 00:19:05.722 Hello world! 
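All of the example binaries exercised above address the target the same way: a -r transport ID string naming the VFIOUSER transport, the vfio-user socket directory, and the subsystem NQN. A minimal sketch of re-running the hello_world example by hand against the same endpoint (SPDK_DIR is a hypothetical shorthand for the workspace path used throughout this log):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Connect to the vfio-user controller and perform one write/read round trip.
  $SPDK_DIR/build/examples/hello_world -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'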
00:19:05.722 [2024-12-13 12:24:33.001288] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:05.722 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:05.722 [2024-12-13 12:24:33.277440] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:06.687 Initializing NVMe Controllers 00:19:06.687 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:06.687 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:06.687 Initialization complete. Launching workers. 00:19:06.687 submit (in ns) avg, min, max = 6548.9, 3125.7, 4001587.6 00:19:06.687 complete (in ns) avg, min, max = 20944.8, 1726.7, 4996856.2 00:19:06.687 00:19:06.687 Submit histogram 00:19:06.687 ================ 00:19:06.687 Range in us Cumulative Count 00:19:06.687 3.124 - 3.139: 0.0061% ( 1) 00:19:06.687 3.139 - 3.154: 0.0306% ( 4) 00:19:06.687 3.154 - 3.170: 0.0672% ( 6) 00:19:06.687 3.170 - 3.185: 0.1100% ( 7) 00:19:06.687 3.185 - 3.200: 0.3790% ( 44) 00:19:06.687 3.200 - 3.215: 1.9623% ( 259) 00:19:06.687 3.215 - 3.230: 6.4922% ( 741) 00:19:06.687 3.230 - 3.246: 12.2020% ( 934) 00:19:06.687 3.246 - 3.261: 18.1501% ( 973) 00:19:06.687 3.261 - 3.276: 25.7855% ( 1249) 00:19:06.687 3.276 - 3.291: 33.2437% ( 1220) 00:19:06.687 3.291 - 3.307: 39.2896% ( 989) 00:19:06.687 3.307 - 3.322: 44.6020% ( 869) 00:19:06.687 3.322 - 3.337: 49.3275% ( 773) 00:19:06.687 3.337 - 3.352: 53.1239% ( 621) 00:19:06.687 3.352 - 3.368: 57.4704% ( 711) 00:19:06.687 3.368 - 3.383: 65.0813% ( 1245) 00:19:06.687 3.383 - 3.398: 70.8155% ( 938) 00:19:06.687 3.398 - 3.413: 75.9384% ( 838) 00:19:06.687 3.413 - 3.429: 80.8840% ( 809) 00:19:06.687 3.429 - 3.444: 84.1973% ( 542) 00:19:06.687 3.444 - 3.459: 86.3308% ( 349) 00:19:06.687 3.459 - 3.474: 87.3090% ( 160) 00:19:06.687 3.474 - 3.490: 87.9081% ( 98) 00:19:06.687 3.490 - 3.505: 88.4338% ( 86) 00:19:06.687 3.505 - 3.520: 88.9656% ( 87) 00:19:06.687 3.520 - 3.535: 89.6931% ( 119) 00:19:06.687 3.535 - 3.550: 90.6346% ( 154) 00:19:06.687 3.550 - 3.566: 91.7227% ( 178) 00:19:06.687 3.566 - 3.581: 92.6152% ( 146) 00:19:06.687 3.581 - 3.596: 93.5444% ( 152) 00:19:06.687 3.596 - 3.611: 94.2291% ( 112) 00:19:06.687 3.611 - 3.627: 94.9138% ( 112) 00:19:06.687 3.627 - 3.642: 95.7758% ( 141) 00:19:06.687 3.642 - 3.657: 96.6194% ( 138) 00:19:06.687 3.657 - 3.672: 97.4019% ( 128) 00:19:06.687 3.672 - 3.688: 97.9093% ( 83) 00:19:06.687 3.688 - 3.703: 98.3861% ( 78) 00:19:06.687 3.703 - 3.718: 98.7223% ( 55) 00:19:06.687 3.718 - 3.733: 98.9730% ( 41) 00:19:06.687 3.733 - 3.749: 99.1747% ( 33) 00:19:06.687 3.749 - 3.764: 99.3275% ( 25) 00:19:06.687 3.764 - 3.779: 99.4682% ( 23) 00:19:06.687 3.779 - 3.794: 99.5293% ( 10) 00:19:06.687 3.794 - 3.810: 99.5660% ( 6) 00:19:06.687 3.810 - 3.825: 99.6026% ( 6) 00:19:06.687 3.840 - 3.855: 99.6088% ( 1) 00:19:06.687 3.870 - 3.886: 99.6149% ( 1) 00:19:06.687 4.084 - 4.114: 99.6210% ( 1) 00:19:06.687 4.937 - 4.968: 99.6271% ( 1) 00:19:06.687 5.333 - 5.364: 99.6332% ( 1) 00:19:06.687 5.364 - 5.394: 99.6393% ( 1) 00:19:06.687 5.547 - 5.577: 99.6515% ( 2) 00:19:06.687 5.577 - 5.608: 99.6577% ( 1) 00:19:06.687 5.669 - 5.699: 99.6638% ( 1) 00:19:06.687 5.699 - 5.730: 99.6699% ( 1) 00:19:06.687 5.730 - 5.760: 99.6760% ( 1) 00:19:06.687 
5.882 - 5.912: 99.6821% ( 1) 00:19:06.687 6.156 - 6.187: 99.6882% ( 1) 00:19:06.687 6.400 - 6.430: 99.6943% ( 1) 00:19:06.687 6.491 - 6.522: 99.7005% ( 1) 00:19:06.687 6.644 - 6.674: 99.7066% ( 1) 00:19:06.687 6.735 - 6.766: 99.7127% ( 1) 00:19:06.687 6.766 - 6.796: 99.7188% ( 1) 00:19:06.687 6.857 - 6.888: 99.7249% ( 1) 00:19:06.687 6.979 - 7.010: 99.7310% ( 1) 00:19:06.687 7.101 - 7.131: 99.7371% ( 1) 00:19:06.687 7.284 - 7.314: 99.7432% ( 1) 00:19:06.687 7.406 - 7.436: 99.7494% ( 1) 00:19:06.687 7.497 - 7.528: 99.7555% ( 1) 00:19:06.687 7.558 - 7.589: 99.7616% ( 1) 00:19:06.687 7.680 - 7.710: 99.7677% ( 1) 00:19:06.687 7.741 - 7.771: 99.7738% ( 1) 00:19:06.687 7.802 - 7.863: 99.7799% ( 1) 00:19:06.687 7.863 - 7.924: 99.7860% ( 1) 00:19:06.687 8.046 - 8.107: 99.7922% ( 1) 00:19:06.687 8.107 - 8.168: 99.7983% ( 1) 00:19:06.687 8.168 - 8.229: 99.8105% ( 2) 00:19:06.687 8.290 - 8.350: 99.8227% ( 2) 00:19:06.687 8.533 - 8.594: 99.8288% ( 1) 00:19:06.687 8.716 - 8.777: 99.8349% ( 1) 00:19:06.687 8.838 - 8.899: 99.8411% ( 1) 00:19:06.687 9.326 - 9.387: 99.8533% ( 2) 00:19:06.687 9.387 - 9.448: 99.8594% ( 1) 00:19:06.687 9.509 - 9.570: 99.8900% ( 5) 00:19:06.687 9.691 - 9.752: 99.8961% ( 1) 00:19:06.687 9.752 - 9.813: 99.9022% ( 1) 00:19:06.687 9.813 - 9.874: 99.9083% ( 1) 00:19:06.687 13.044 - 13.105: 99.9144% ( 1) 00:19:06.687 19.261 - 19.383: 99.9205% ( 1) 00:19:06.687 3994.575 - 4025.783: 100.0000% ( 13) 00:19:06.687 00:19:06.687 Complete histogram 00:19:06.687 ================== 00:19:06.687 Range in us Cumulative Count 00:19:06.687 1.722 - 1.730: 0.0489% ( 8) 00:19:06.687 1.730 - 1.737: 0.1528% ( 17) 00:19:06.687 1.737 - 1.745: 0.2629% ( 18) 00:19:06.687 1.745 - 1.752: 0.2873% ( 4) 00:19:06.687 1.752 - 1.760: 0.3057% ( 3) 00:19:06.687 1.760 - 1.768: 0.4096% ( 17) 00:19:06.687 1.768 - 1.775: 2.1641% ( 287) 00:19:06.687 1.775 - 1.783: 13.0089% ( 1774) 00:19:06.687 1.783 - 1.790: 34.1790% ( 3463) 00:19:06.687 1.790 - 1.798: 51.0209% ( 2755) 00:19:06.687 1.798 - 1.806: 58.7541% ( 1265) 00:19:06.687 1.806 - 1.813: 62.2142% ( 566) 00:19:06.687 1.813 - 1.821: 64.2988% ( 341) 00:19:06.687 1.821 - 1.829: 65.7110% ( 231) 00:19:06.687 1.829 - 1.836: 69.3544% ( 596) 00:19:06.687 1.836 - 1.844: 77.7173% ( 1368) 00:19:06.688 1.844 - 1.851: 86.5509% ( 1445) 00:19:06.688 1.851 - 1.859: 91.9306% ( 880) 00:19:06.688 1.859 - 1.867: 94.7060% ( 454) 00:19:06.688 1.867 - 1.874: 96.3443% ( 268) 00:19:06.688 1.874 - 1.882: 97.2124% ( 142) 00:19:06.688 1.882 - 1.890: 97.6892% ( 78) 00:19:06.688 1.890 - 1.897: 97.9398% ( 41) 00:19:06.688 1.897 - 1.905: 98.1599% ( 36) 00:19:06.688 1.905 - 1.912: 98.4045% ( 40) 00:19:06.688 1.912 - 1.920: 98.6368% ( 38) 00:19:06.688 1.920 - 1.928: 98.8752% ( 39) 00:19:06.688 1.928 - 1.935: 98.9913% ( 19) 00:19:06.688 1.935 - 1.943: 99.0830% ( 15) 00:19:06.688 1.943 - 1.950: 99.1197% ( 6) 00:19:06.688 1.950 - 1.966: 99.2053% ( 14) 00:19:06.688 1.966 - 1.981: 99.2481% ( 7) 00:19:06.688 1.981 - 1.996: 99.2603% ( 2) 00:19:06.688 2.011 - 2.027: 99.2725% ( 2) 00:19:06.688 2.057 - 2.072: 99.2848% ( 2) 00:19:06.688 2.088 - 2.103: 99.2909% ( 1) 00:19:06.688 2.331 - 2.347: 99.2970% ( 1) 00:19:06.688 3.703 - 3.718: 99.3031% ( 1) 00:19:06.688 3.733 - 3.749: 99.3092% ( 1) 00:19:06.688 3.764 - 3.779: 99.3153% ( 1) 00:19:06.688 3.855 - 3.870: 99.3214% ( 1) 00:19:06.688 3.992 - 4.023: 99.3337% ( 2) 00:19:06.688 4.084 - 4.114: 99.3459% ( 2) 00:19:06.688 4.114 - 4.145: 99.3520% ( 1) 00:19:06.688 4.602 - 4.632: 99.3581% ( 1) 00:19:06.688 4.968 - 4.998: 99.3642% ( 1) 00:19:06.688 5.120 - 5.150: 
99.3703% ( 1) 00:19:06.688 5.272 - 5.303: 99.3765% ( 1) 00:19:06.688 5.364 - 5.394: 99.3826% ( 1) 00:19:06.688 5.516 - 5.547: 99.3887% ( 1) 00:19:06.688 5.577 - 5.608: 99.3948% ( 1) 00:19:06.688 5.638 - 5.669: 99.4009% ( 1) 00:19:06.688 5.821 - 5.851: 99.4070% ( 1) 00:19:06.688 6.065 - 6.095: 99.4131% ( 1) 00:19:06.688 6.187 - 6.217: 99.4192% ( 1) 00:19:06.688 6.339 - 6.370: 99.4254% ( 1) 00:19:06.688 6.370 - 6.400: 99.4315% ( 1) 00:19:06.688 6.400 - 6.430: 99.4376% ( 1) 00:19:06.688 6.461 - 6.491: 99.4437% ( 1) 00:19:06.688 6.644 - 6.674: 99.4498% ( 1) 00:19:06.688 7.162 - 7.192: 99.4559% ( 1) 00:19:06.688 7.253 - 7.284: 99.4620% ( 1) 00:19:06.688 7.436 - 7.467: 99.4682% ( 1) 00:19:06.688 7.771 - 7.802: 99.4743% ( 1) 00:19:06.688 8.350 - 8.411: 99.4804% ( 1) 00:19:06.688 8.411 - 8.472: 99.4865% ( 1) 00:19:06.688 8.533 - 8.594: 99.4926% ( 1) 00:19:06.688 8.655 - 8.716: 99.4987% ( 1) 00:19:06.688 9.143 - 9.204: 99.5048% ( 1) 00:19:06.688 9.387 - 9.448: 99.5109% ( 1) 00:19:06.688 38.278 - 38.522: 99.5171% ( 1) 00:19:06.688 148.236 - 149.211: 99.5232% ( 1) 00:19:06.688 3011.535 - 3027.139: 99.5293% ( 1) 00:19:06.688 [2024-12-13 12:24:34.372787] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:06.948 3994.575 - 4025.783: 99.9878% ( 75) 00:19:06.948 4962.011 - 4993.219: 99.9939% ( 1) 00:19:06.948 4993.219 - 5024.427: 100.0000% ( 1) 00:19:06.948 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:06.948 [ 00:19:06.948 { 00:19:06.948 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:06.948 "subtype": "Discovery", 00:19:06.948 "listen_addresses": [], 00:19:06.948 "allow_any_host": true, 00:19:06.948 "hosts": [] 00:19:06.948 }, 00:19:06.948 { 00:19:06.948 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:06.948 "subtype": "NVMe", 00:19:06.948 "listen_addresses": [ 00:19:06.948 { 00:19:06.948 "trtype": "VFIOUSER", 00:19:06.948 "adrfam": "IPv4", 00:19:06.948 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:06.948 "trsvcid": "0" 00:19:06.948 } 00:19:06.948 ], 00:19:06.948 "allow_any_host": true, 00:19:06.948 "hosts": [], 00:19:06.948 "serial_number": "SPDK1", 00:19:06.948 "model_number": "SPDK bdev Controller", 00:19:06.948 "max_namespaces": 32, 00:19:06.948 "min_cntlid": 1, 00:19:06.948 "max_cntlid": 65519, 00:19:06.948 "namespaces": [ 00:19:06.948 { 00:19:06.948 "nsid": 1, 00:19:06.948 "bdev_name": "Malloc1", 00:19:06.948 "name": "Malloc1", 00:19:06.948 "nguid": "15A65DC82AB041868157FD873C2A1D18", 00:19:06.948 "uuid": "15a65dc8-2ab0-4186-8157-fd873c2a1d18" 00:19:06.948 }, 00:19:06.948 { 00:19:06.948 "nsid": 2, 00:19:06.948 "bdev_name": "Malloc3", 00:19:06.948 "name": "Malloc3", 00:19:06.948 "nguid": "4BC7763DE8CA4E1D9AB6E58480BCC9B6", 00:19:06.948 "uuid": "4bc7763d-e8ca-4e1d-9ab6-e58480bcc9b6" 00:19:06.948 } 
00:19:06.948 ] 00:19:06.948 }, 00:19:06.948 { 00:19:06.948 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:06.948 "subtype": "NVMe", 00:19:06.948 "listen_addresses": [ 00:19:06.948 { 00:19:06.948 "trtype": "VFIOUSER", 00:19:06.948 "adrfam": "IPv4", 00:19:06.948 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:06.948 "trsvcid": "0" 00:19:06.948 } 00:19:06.948 ], 00:19:06.948 "allow_any_host": true, 00:19:06.948 "hosts": [], 00:19:06.948 "serial_number": "SPDK2", 00:19:06.948 "model_number": "SPDK bdev Controller", 00:19:06.948 "max_namespaces": 32, 00:19:06.948 "min_cntlid": 1, 00:19:06.948 "max_cntlid": 65519, 00:19:06.948 "namespaces": [ 00:19:06.948 { 00:19:06.948 "nsid": 1, 00:19:06.948 "bdev_name": "Malloc2", 00:19:06.948 "name": "Malloc2", 00:19:06.948 "nguid": "B1E8E17330164A358F32766BEADA18EE", 00:19:06.948 "uuid": "b1e8e173-3016-4a35-8f32-766beada18ee" 00:19:06.948 } 00:19:06.948 ] 00:19:06.948 } 00:19:06.948 ] 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=304102 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:06.948 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:07.208 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:07.208 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:07.208 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:07.209 [2024-12-13 12:24:34.764213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:07.209 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:07.471 Malloc4 00:19:07.471 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:07.731 [2024-12-13 12:24:35.241750] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.731 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:07.731 Asynchronous Event Request test 00:19:07.731 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.731 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.731 Registering asynchronous event callbacks... 00:19:07.731 Starting namespace attribute notice tests for all controllers... 00:19:07.731 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:07.731 aer_cb - Changed Namespace 00:19:07.731 Cleaning up... 00:19:07.992 [ 00:19:07.992 { 00:19:07.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:07.992 "subtype": "Discovery", 00:19:07.992 "listen_addresses": [], 00:19:07.992 "allow_any_host": true, 00:19:07.992 "hosts": [] 00:19:07.992 }, 00:19:07.992 { 00:19:07.992 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:07.992 "subtype": "NVMe", 00:19:07.992 "listen_addresses": [ 00:19:07.992 { 00:19:07.992 "trtype": "VFIOUSER", 00:19:07.992 "adrfam": "IPv4", 00:19:07.992 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:07.992 "trsvcid": "0" 00:19:07.992 } 00:19:07.992 ], 00:19:07.992 "allow_any_host": true, 00:19:07.992 "hosts": [], 00:19:07.992 "serial_number": "SPDK1", 00:19:07.992 "model_number": "SPDK bdev Controller", 00:19:07.992 "max_namespaces": 32, 00:19:07.992 "min_cntlid": 1, 00:19:07.992 "max_cntlid": 65519, 00:19:07.992 "namespaces": [ 00:19:07.992 { 00:19:07.992 "nsid": 1, 00:19:07.992 "bdev_name": "Malloc1", 00:19:07.992 "name": "Malloc1", 00:19:07.992 "nguid": "15A65DC82AB041868157FD873C2A1D18", 00:19:07.992 "uuid": "15a65dc8-2ab0-4186-8157-fd873c2a1d18" 00:19:07.992 }, 00:19:07.992 { 00:19:07.992 "nsid": 2, 00:19:07.992 "bdev_name": "Malloc3", 00:19:07.992 "name": "Malloc3", 00:19:07.992 "nguid": "4BC7763DE8CA4E1D9AB6E58480BCC9B6", 00:19:07.992 "uuid": "4bc7763d-e8ca-4e1d-9ab6-e58480bcc9b6" 00:19:07.992 } 00:19:07.992 ] 00:19:07.992 }, 00:19:07.992 { 00:19:07.992 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:07.992 "subtype": "NVMe", 00:19:07.992 "listen_addresses": [ 00:19:07.992 { 00:19:07.992 "trtype": "VFIOUSER", 00:19:07.992 "adrfam": "IPv4", 00:19:07.992 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:07.992 "trsvcid": "0" 00:19:07.992 } 00:19:07.992 ], 00:19:07.992 "allow_any_host": true, 00:19:07.992 "hosts": [], 00:19:07.992 "serial_number": "SPDK2", 00:19:07.992 "model_number": "SPDK bdev Controller", 00:19:07.992 "max_namespaces": 32, 00:19:07.992 "min_cntlid": 1, 00:19:07.992 "max_cntlid": 65519, 00:19:07.992 "namespaces": [ 00:19:07.992 
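The aer test above blocks in waitforfile until /tmp/aer_touch_file appears; the aer tool was started with -t /tmp/aer_touch_file, so the file shows up once the controller reports a namespace attribute change. The Malloc4 bdev just created is what drives that change: hot-adding it as NSID 2 (the nvmf_subsystem_add_ns call that follows) makes the target raise the Namespace Attribute Notice AER. A minimal sketch of that RPC pair, matching the commands traced here (RPC is a hypothetical shorthand for the rpc.py path above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create a 64 MB malloc bdev with 512-byte blocks ...
  $RPC bdev_malloc_create 64 512 --name Malloc4
  # ... and hot-add it to the subsystem as NSID 2, which fires the AER.
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2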
{ 00:19:07.992 "nsid": 1, 00:19:07.992 "bdev_name": "Malloc2", 00:19:07.992 "name": "Malloc2", 00:19:07.992 "nguid": "B1E8E17330164A358F32766BEADA18EE", 00:19:07.992 "uuid": "b1e8e173-3016-4a35-8f32-766beada18ee" 00:19:07.992 }, 00:19:07.992 { 00:19:07.992 "nsid": 2, 00:19:07.992 "bdev_name": "Malloc4", 00:19:07.992 "name": "Malloc4", 00:19:07.992 "nguid": "4616CAD0C9BA4EF7BC7DC8E2095EC1B3", 00:19:07.992 "uuid": "4616cad0-c9ba-4ef7-bc7d-c8e2095ec1b3" 00:19:07.992 } 00:19:07.992 ] 00:19:07.992 } 00:19:07.992 ] 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 304102 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296021 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296021 ']' 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296021 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296021 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296021' 00:19:07.992 killing process with pid 296021 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296021 00:19:07.992 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296021 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304341 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304341' 00:19:08.253 Process pid: 304341 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 304341 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 304341 ']' 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.253 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:08.253 [2024-12-13 12:24:35.806742] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:08.253 [2024-12-13 12:24:35.807575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:08.253 [2024-12-13 12:24:35.807610] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.253 [2024-12-13 12:24:35.881175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.253 [2024-12-13 12:24:35.900543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.253 [2024-12-13 12:24:35.900584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.253 [2024-12-13 12:24:35.900597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.253 [2024-12-13 12:24:35.900603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.253 [2024-12-13 12:24:35.900607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.253 [2024-12-13 12:24:35.902028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.253 [2024-12-13 12:24:35.902136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.253 [2024-12-13 12:24:35.902242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.253 [2024-12-13 12:24:35.902243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.513 [2024-12-13 12:24:35.965483] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:08.513 [2024-12-13 12:24:35.966330] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:08.513 [2024-12-13 12:24:35.966525] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:08.513 [2024-12-13 12:24:35.966979] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:08.513 [2024-12-13 12:24:35.967004] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
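This second pass repeats the whole fixture in interrupt mode: nvmf_tgt is restarted with --interrupt-mode, the reactors and nvmf poll-group threads report switching their spdk_threads to interrupt mode above, and the VFIOUSER transport is then created with the extra '-M -I' arguments the script passes in this mode (the nvmf_create_transport call just below). A minimal sketch of that bring-up, matching the traced commands (SPDK_DIR is again a hypothetical shorthand):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target on cores 0-3 with interrupt-mode reactors.
  $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1   # crude wait for the RPC socket, as the test script does
  # Create the vfio-user transport with the interrupt-mode transport args.
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I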
00:19:08.513 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.513 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:08.513 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:09.457 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:09.723 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:09.723 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:09.724 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:09.724 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:09.724 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:09.724 Malloc1 00:19:09.988 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:09.988 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:10.256 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:10.535 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:10.535 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:10.535 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:10.535 Malloc2 00:19:10.817 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:10.817 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:11.099 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304341 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 304341 ']' 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 304341 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304341 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304341' 00:19:11.372 killing process with pid 304341 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 304341 00:19:11.372 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 304341 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:11.638 00:19:11.638 real 0m51.273s 00:19:11.638 user 3m18.491s 00:19:11.638 sys 0m3.368s 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:11.638 ************************************ 00:19:11.638 END TEST nvmf_vfio_user 00:19:11.638 ************************************ 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:11.638 ************************************ 00:19:11.638 START TEST nvmf_vfio_user_nvme_compliance 00:19:11.638 ************************************ 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:11.638 * Looking for test storage... 
00:19:11.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.638 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:11.639 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:11.639 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.639 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.903 --rc genhtml_branch_coverage=1 00:19:11.903 --rc genhtml_function_coverage=1 00:19:11.903 --rc genhtml_legend=1 00:19:11.903 --rc geninfo_all_blocks=1 00:19:11.903 --rc geninfo_unexecuted_blocks=1 00:19:11.903 00:19:11.903 ' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.903 --rc genhtml_branch_coverage=1 00:19:11.903 --rc genhtml_function_coverage=1 00:19:11.903 --rc genhtml_legend=1 00:19:11.903 --rc geninfo_all_blocks=1 00:19:11.903 --rc geninfo_unexecuted_blocks=1 00:19:11.903 00:19:11.903 ' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.903 --rc genhtml_branch_coverage=1 00:19:11.903 --rc genhtml_function_coverage=1 00:19:11.903 --rc genhtml_legend=1 00:19:11.903 --rc geninfo_all_blocks=1 00:19:11.903 --rc geninfo_unexecuted_blocks=1 00:19:11.903 00:19:11.903 ' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.903 --rc genhtml_branch_coverage=1 00:19:11.903 --rc genhtml_function_coverage=1 00:19:11.903 --rc genhtml_legend=1 00:19:11.903 --rc geninfo_all_blocks=1 00:19:11.903 --rc 
geninfo_unexecuted_blocks=1 00:19:11.903 00:19:11.903 ' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:11.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=305085 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 305085' 00:19:11.903 Process pid: 305085 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 305085 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 305085 ']' 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.903 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:11.904 [2024-12-13 12:24:39.432169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:11.904 [2024-12-13 12:24:39.432214] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.904 [2024-12-13 12:24:39.505280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.904 [2024-12-13 12:24:39.527824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.904 [2024-12-13 12:24:39.527863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.904 [2024-12-13 12:24:39.527870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.904 [2024-12-13 12:24:39.527877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.904 [2024-12-13 12:24:39.527881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.904 [2024-12-13 12:24:39.529054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.904 [2024-12-13 12:24:39.529160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.904 [2024-12-13 12:24:39.529161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.166 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.166 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:12.166 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.129 malloc0 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:13.129 12:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.129 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:13.129 00:19:13.129 00:19:13.129 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.129 http://cunit.sourceforge.net/ 00:19:13.129 00:19:13.129 00:19:13.129 Suite: nvme_compliance 00:19:13.401 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-13 12:24:40.864240] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.401 [2024-12-13 12:24:40.865577] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:13.401 [2024-12-13 12:24:40.865593] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:13.401 [2024-12-13 12:24:40.865598] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:13.402 [2024-12-13 12:24:40.867265] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.402 passed 00:19:13.402 Test: admin_identify_ctrlr_verify_fused ...[2024-12-13 12:24:40.943796] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.402 [2024-12-13 12:24:40.946813] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.402 passed 00:19:13.402 Test: admin_identify_ns ...[2024-12-13 12:24:41.029895] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.402 [2024-12-13 12:24:41.087790] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:13.402 [2024-12-13 12:24:41.095795] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:13.676 [2024-12-13 12:24:41.116873] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:13.676 passed 00:19:13.676 Test: admin_get_features_mandatory_features ...[2024-12-13 12:24:41.193742] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.676 [2024-12-13 12:24:41.198784] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.676 passed 00:19:13.676 Test: admin_get_features_optional_features ...[2024-12-13 12:24:41.274294] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.677 [2024-12-13 12:24:41.277322] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.677 passed 00:19:13.677 Test: admin_set_features_number_of_queues ...[2024-12-13 12:24:41.355049] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.949 [2024-12-13 12:24:41.460916] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.949 passed 00:19:13.949 Test: admin_get_log_page_mandatory_logs ...[2024-12-13 12:24:41.536371] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:13.949 [2024-12-13 12:24:41.539395] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:13.949 passed 00:19:13.949 Test: admin_get_log_page_with_lpo ...[2024-12-13 12:24:41.618209] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.221 [2024-12-13 12:24:41.685800] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:14.222 [2024-12-13 12:24:41.698836] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.222 passed 00:19:14.222 Test: fabric_property_get ...[2024-12-13 12:24:41.772436] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.222 [2024-12-13 12:24:41.773663] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:14.222 [2024-12-13 12:24:41.776462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.222 passed 00:19:14.222 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-13 12:24:41.854003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.222 [2024-12-13 12:24:41.855242] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:14.222 [2024-12-13 12:24:41.857022] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.222 passed 00:19:14.491 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-13 12:24:41.933741] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.491 [2024-12-13 12:24:42.014793] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.491 [2024-12-13 12:24:42.030793] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.491 [2024-12-13 12:24:42.035871] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.491 passed 00:19:14.491 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-13 12:24:42.111193] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.491 [2024-12-13 12:24:42.112418] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:14.491 [2024-12-13 12:24:42.114217] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.491 passed 00:19:14.759 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-13 12:24:42.191834] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.759 [2024-12-13 12:24:42.268799] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:14.759 [2024-12-13 12:24:42.292795] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:14.759 [2024-12-13 12:24:42.297868] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.759 passed 00:19:14.759 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-13 12:24:42.373493] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:14.759 [2024-12-13 12:24:42.374721] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:14.759 [2024-12-13 12:24:42.374744] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:14.759 [2024-12-13 12:24:42.376514] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:14.759 passed 00:19:14.759 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-13 12:24:42.456165] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.027 [2024-12-13 12:24:42.548792] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:15.027 [2024-12-13 12:24:42.556798] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:15.027 [2024-12-13 12:24:42.564792] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:15.027 [2024-12-13 12:24:42.572789] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:15.027 [2024-12-13 12:24:42.601881] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.027 passed 00:19:15.027 Test: admin_create_io_sq_verify_pc ...[2024-12-13 12:24:42.675986] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.027 [2024-12-13 12:24:42.691797] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:15.027 [2024-12-13 12:24:42.709166] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.296 passed 00:19:15.296 Test: admin_create_io_qp_max_qps ...[2024-12-13 12:24:42.791705] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.252 [2024-12-13 12:24:43.890792] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:16.872 [2024-12-13 12:24:44.282243] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.872 passed 00:19:16.872 Test: admin_create_io_sq_shared_cq ...[2024-12-13 12:24:44.356914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.872 [2024-12-13 12:24:44.489794] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:16.872 [2024-12-13 12:24:44.526848] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.872 passed 00:19:16.872 00:19:16.872 Run Summary: Type Total Ran Passed Failed Inactive 00:19:16.872 suites 1 1 n/a 0 0 00:19:16.872 tests 18 18 18 0 0 00:19:16.872 asserts 
360 360 360 0 n/a 00:19:16.872 00:19:16.872 Elapsed time = 1.508 seconds 00:19:16.872 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 305085 00:19:16.872 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 305085 ']' 00:19:16.872 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 305085 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305085 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305085' 00:19:17.149 killing process with pid 305085 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 305085 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 305085 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:17.149 00:19:17.149 real 0m5.628s 00:19:17.149 user 0m15.769s 00:19:17.149 sys 0m0.509s 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:17.149 ************************************ 00:19:17.149 END TEST nvmf_vfio_user_nvme_compliance 00:19:17.149 ************************************ 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.149 12:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.440 ************************************ 00:19:17.440 START TEST nvmf_vfio_user_fuzz 00:19:17.440 ************************************ 00:19:17.440 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:17.440 * Looking for test storage... 
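For reference, the killprocess trace above (kill -0 liveness probe, ps comm= sanity check, then kill and wait) reduces to roughly this shape; a sketch reconstructed from the xtrace, not the verbatim autotest_common.sh source:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # nothing left to kill
    if [ "$(uname)" = Linux ]; then
        # the real helper special-cases a sudo wrapper here; this sketch just bails out
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it is our child
}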
00:19:17.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.440 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.440 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.440 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.440 --rc genhtml_branch_coverage=1 00:19:17.440 --rc genhtml_function_coverage=1 00:19:17.440 --rc genhtml_legend=1 00:19:17.440 --rc geninfo_all_blocks=1 00:19:17.440 --rc geninfo_unexecuted_blocks=1 00:19:17.440 00:19:17.440 ' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.440 --rc genhtml_branch_coverage=1 00:19:17.440 --rc genhtml_function_coverage=1 00:19:17.440 --rc genhtml_legend=1 00:19:17.440 --rc geninfo_all_blocks=1 00:19:17.440 --rc geninfo_unexecuted_blocks=1 00:19:17.440 00:19:17.440 ' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.440 --rc genhtml_branch_coverage=1 00:19:17.440 --rc genhtml_function_coverage=1 00:19:17.440 --rc genhtml_legend=1 00:19:17.440 --rc geninfo_all_blocks=1 00:19:17.440 --rc geninfo_unexecuted_blocks=1 00:19:17.440 00:19:17.440 ' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.440 --rc genhtml_branch_coverage=1 00:19:17.440 --rc genhtml_function_coverage=1 00:19:17.440 --rc genhtml_legend=1 00:19:17.440 --rc geninfo_all_blocks=1 00:19:17.440 --rc geninfo_unexecuted_blocks=1 00:19:17.440 00:19:17.440 ' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.440 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:17.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=306076 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 306076' 00:19:17.441 Process pid: 306076 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 306076 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 306076 ']' 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
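The repeated /opt/go, /opt/golangci and /opt/protoc prefixes in the PATH dumps above accumulate because paths/export.sh re-prepends the same directories every time it is sourced. A dedup-guarded prepend would keep the export idempotent; a minimal sketch, not the actual export.sh:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already on PATH, skip
        *) PATH=$1:$PATH ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
export PATH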
00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.441 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:17.731 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.731 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:17.731 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:18.759 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:18.759 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.759 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.759 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.760 malloc0 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
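The rpc_cmd calls above map directly onto SPDK's scripts/rpc.py; a minimal sketch of the same vfio-user fuzz-target bring-up, assuming an nvmf_tgt already listening on the default /var/tmp/spdk.sock:

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0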
00:19:18.760 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:51.043 Fuzzing completed. Shutting down the fuzz application 00:19:51.043 00:19:51.043 Dumping successful admin opcodes: 00:19:51.043 9, 10, 00:19:51.043 Dumping successful io opcodes: 00:19:51.043 0, 00:19:51.043 NS: 0x20000081ef00 I/O qp, Total commands completed: 1001168, total successful commands: 3918, random_seed: 2024699456 00:19:51.043 NS: 0x20000081ef00 admin qp, Total commands completed: 243504, total successful commands: 57, random_seed: 3533633536 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 306076 ']' 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306076' 00:19:51.043 killing process with pid 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 306076 00:19:51.043 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:51.043 00:19:51.043 real 0m32.158s 00:19:51.043 user 0m29.535s 00:19:51.043 sys 0m31.481s 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:51.043 ************************************ 
00:19:51.043 END TEST nvmf_vfio_user_fuzz 00:19:51.043 ************************************ 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.043 ************************************ 00:19:51.043 START TEST nvmf_auth_target 00:19:51.043 ************************************ 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:51.043 * Looking for test storage... 00:19:51.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.043 --rc genhtml_branch_coverage=1 00:19:51.043 --rc genhtml_function_coverage=1 00:19:51.043 --rc genhtml_legend=1 00:19:51.043 --rc geninfo_all_blocks=1 00:19:51.043 --rc geninfo_unexecuted_blocks=1 00:19:51.043 00:19:51.043 ' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.043 --rc genhtml_branch_coverage=1 00:19:51.043 --rc genhtml_function_coverage=1 00:19:51.043 --rc genhtml_legend=1 00:19:51.043 --rc geninfo_all_blocks=1 00:19:51.043 --rc geninfo_unexecuted_blocks=1 00:19:51.043 00:19:51.043 ' 00:19:51.043 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.043 --rc genhtml_branch_coverage=1 00:19:51.043 --rc genhtml_function_coverage=1 00:19:51.043 --rc genhtml_legend=1 00:19:51.043 --rc geninfo_all_blocks=1 00:19:51.043 --rc geninfo_unexecuted_blocks=1 00:19:51.043 00:19:51.043 ' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:51.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.044 --rc genhtml_branch_coverage=1 00:19:51.044 --rc genhtml_function_coverage=1 00:19:51.044 --rc genhtml_legend=1 00:19:51.044 --rc geninfo_all_blocks=1 00:19:51.044 --rc geninfo_unexecuted_blocks=1 00:19:51.044 00:19:51.044 ' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.044 12:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.044 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.237 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:55.238 
12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:55.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.238 12:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:55.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.238 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:55.497 Found net devices under 0000:af:00.0: cvl_0_0 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.497 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:55.498 Found net devices under 0000:af:00.1: cvl_0_1 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.498 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:55.498 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:55.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:19:55.498 00:19:55.498 --- 10.0.0.2 ping statistics --- 00:19:55.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.498 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:19:55.498 00:19:55.498 --- 10.0.0.1 ping statistics --- 00:19:55.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.498 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:55.498 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=314196 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 314196 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314196 ']' 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
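For reference, the nvmf_tcp_init sequence traced above reduces to the commands below: the first E810 port (cvl_0_0) moves into a private network namespace to host the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP on port 4420 before reachability is pinged in both directions. A condensed sketch only; interface names and addresses are taken from the trace, and the commands assume root.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator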
00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.757 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=314285 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb183defd8aff5dff52e73b192070f6f9b933c5d1de29dd8 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.h0t 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb183defd8aff5dff52e73b192070f6f9b933c5d1de29dd8 0 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb183defd8aff5dff52e73b192070f6f9b933c5d1de29dd8 0 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb183defd8aff5dff52e73b192070f6f9b933c5d1de29dd8 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
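The gen_dhchap_key call traced above draws random bytes with xxd and hands their hex expansion to a small Python formatter. A minimal equivalent sketch for "gen_dhchap_key null 48" follows; the DHHC-1 layout (base64 of the ASCII secret followed by its CRC-32, with a two-digit hash indicator) is an assumption inferred from the secrets visible later in this log, and the CRC byte order is assumed little-endian — the authoritative formatter lives in test/nvmf/common.sh.

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars, used as the ASCII secret
file=$(mktemp -t spdk.key-null.XXX)
python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); crc=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:00:%s:" % base64.b64encode(s+crc).decode())' "$key" > "$file"
chmod 0600 "$file"                       # matches the chmod traced below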
00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.h0t 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.h0t 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.h0t 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2651d3da6f3c9ee52c96d8ee370ccdfff8ef57dea5d5856bc0351e203fac6064 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Wjo 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2651d3da6f3c9ee52c96d8ee370ccdfff8ef57dea5d5856bc0351e203fac6064 3 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2651d3da6f3c9ee52c96d8ee370ccdfff8ef57dea5d5856bc0351e203fac6064 3 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2651d3da6f3c9ee52c96d8ee370ccdfff8ef57dea5d5856bc0351e203fac6064 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Wjo 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Wjo 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Wjo 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
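The two-digit hash indicator in each DHHC-1 string and the key-file suffix both follow from the digests map traced above. For quick reference (the file names and prefixes match the secrets that appear later in this log):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
# null   -> /tmp/spdk.key-null.*   -> DHHC-1:00:...   (keys[0] above)
# sha256 -> /tmp/spdk.key-sha256.* -> DHHC-1:01:...
# sha384 -> /tmp/spdk.key-sha384.* -> DHHC-1:02:...
# sha512 -> /tmp/spdk.key-sha512.* -> DHHC-1:03:...   (ckeys[0] above)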
00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=408f8afda7021a143bb16c7aedc2765e 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YiK 00:19:56.017 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 408f8afda7021a143bb16c7aedc2765e 1 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 408f8afda7021a143bb16c7aedc2765e 1 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=408f8afda7021a143bb16c7aedc2765e 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YiK 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YiK 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YiK 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=46663e2c254f6fa4d47f6005ed766c3c50dc62a2bb0ca243 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Hg 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 46663e2c254f6fa4d47f6005ed766c3c50dc62a2bb0ca243 2 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 46663e2c254f6fa4d47f6005ed766c3c50dc62a2bb0ca243 2 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.018 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=46663e2c254f6fa4d47f6005ed766c3c50dc62a2bb0ca243 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:56.018 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Hg 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Hg 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.0Hg 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a24192be7e7c8034ad01602209e4ef4354a689c162310bf 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yW0 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4a24192be7e7c8034ad01602209e4ef4354a689c162310bf 2 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a24192be7e7c8034ad01602209e4ef4354a689c162310bf 2 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a24192be7e7c8034ad01602209e4ef4354a689c162310bf 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yW0 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yW0 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yW0 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:56.277 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
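A generated secret can also be sanity-checked offline by reversing the encoding. A sketch using the key0 secret that appears verbatim later in this trace, assuming the last four decoded bytes are the CRC-32 trailer (byte order per the formatter):

python3 - <<'EOF'
import base64, zlib
s = "DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==:"
raw = base64.b64decode(s.split(":")[2])
secret, crc = raw[:-4], raw[-4:]
print(secret.decode())                     # expect the 48-char hex secret generated above
print(hex(zlib.crc32(secret)), crc.hex())  # compare checksum against the trailer
EOF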
00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=18ad5466d045ef85004a8ce4ea40b648 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ezd 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 18ad5466d045ef85004a8ce4ea40b648 1 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 18ad5466d045ef85004a8ce4ea40b648 1 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=18ad5466d045ef85004a8ce4ea40b648 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ezd 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ezd 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ezd 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e6f118eff7eb837d2162c6779151a88c7f6cfc68a9049fd7f9649f55d754b6c 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Plq 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 0e6f118eff7eb837d2162c6779151a88c7f6cfc68a9049fd7f9649f55d754b6c 3 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e6f118eff7eb837d2162c6779151a88c7f6cfc68a9049fd7f9649f55d754b6c 3 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e6f118eff7eb837d2162c6779151a88c7f6cfc68a9049fd7f9649f55d754b6c 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Plq 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Plq 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Plq 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 314196 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314196 ']' 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.278 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 314285 /var/tmp/host.sock 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314285 ']' 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
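The keyring and authentication flow traced below condenses to the following RPC sequence: secrets are registered as named keys on both sides (the target over the default /var/tmp/spdk.sock, the host application over -s /var/tmp/host.sock), the host is pinned to one digest/dhgroup pair, the subsystem maps the host NQN to key0/ckey0 for bidirectional DH-HMAC-CHAP, and the controller attach plus a qpair query confirm that authentication completed. Paths and NQNs are taken from the trace; rpc.py is scripts/rpc.py.

rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.h0t
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.h0t
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"

A bare-metal initiator consumes the same strings directly, as the nvme connect calls later in this trace show (--dhchap-secret and --dhchap-ctrl-secret with the literal DHHC-1 values).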
00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.537 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h0t 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.h0t 00:19:56.796 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.h0t 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Wjo ]] 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo 00:19:57.054 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YiK 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.313 12:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YiK 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YiK 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.0Hg ]] 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Hg 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.313 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.314 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Hg 00:19:57.314 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Hg 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yW0 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yW0 00:19:57.579 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yW0 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ezd ]] 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezd 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezd 00:19:57.840 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezd 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:58.098 12:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Plq 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Plq 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Plq 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.098 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.357 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.357 
12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.616 00:19:58.616 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.616 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.616 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.875 { 00:19:58.875 "cntlid": 1, 00:19:58.875 "qid": 0, 00:19:58.875 "state": "enabled", 00:19:58.875 "thread": "nvmf_tgt_poll_group_000", 00:19:58.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.875 "listen_address": { 00:19:58.875 "trtype": "TCP", 00:19:58.875 "adrfam": "IPv4", 00:19:58.875 "traddr": "10.0.0.2", 00:19:58.875 "trsvcid": "4420" 00:19:58.875 }, 00:19:58.875 "peer_address": { 00:19:58.875 "trtype": "TCP", 00:19:58.875 "adrfam": "IPv4", 00:19:58.875 "traddr": "10.0.0.1", 00:19:58.875 "trsvcid": "38556" 00:19:58.875 }, 00:19:58.875 "auth": { 00:19:58.875 "state": "completed", 00:19:58.875 "digest": "sha256", 00:19:58.875 "dhgroup": "null" 00:19:58.875 } 00:19:58.875 } 00:19:58.875 ]' 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.875 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.134 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:19:59.134 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:02.422 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.682 12:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.682 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.941 00:20:02.941 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.941 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.941 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.200 { 00:20:03.200 "cntlid": 3, 00:20:03.200 "qid": 0, 00:20:03.200 "state": "enabled", 00:20:03.200 "thread": "nvmf_tgt_poll_group_000", 00:20:03.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.200 "listen_address": { 00:20:03.200 "trtype": "TCP", 00:20:03.200 "adrfam": "IPv4", 00:20:03.200 "traddr": "10.0.0.2", 00:20:03.200 "trsvcid": "4420" 00:20:03.200 }, 00:20:03.200 "peer_address": { 00:20:03.200 "trtype": "TCP", 00:20:03.200 "adrfam": "IPv4", 00:20:03.200 "traddr": "10.0.0.1", 00:20:03.200 "trsvcid": "36042" 00:20:03.200 }, 00:20:03.200 "auth": { 00:20:03.200 "state": "completed", 00:20:03.200 "digest": "sha256", 00:20:03.200 "dhgroup": "null" 00:20:03.200 } 00:20:03.200 } 00:20:03.200 ]' 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.200 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.459 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.459 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.459 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.459 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:03.459 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.027 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.286 12:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.286 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.545 00:20:04.545 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.545 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.545 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.804 { 00:20:04.804 "cntlid": 5, 00:20:04.804 "qid": 0, 00:20:04.804 "state": "enabled", 00:20:04.804 "thread": "nvmf_tgt_poll_group_000", 00:20:04.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.804 "listen_address": { 00:20:04.804 "trtype": "TCP", 00:20:04.804 "adrfam": "IPv4", 00:20:04.804 "traddr": "10.0.0.2", 00:20:04.804 "trsvcid": "4420" 00:20:04.804 }, 00:20:04.804 "peer_address": { 00:20:04.804 "trtype": "TCP", 00:20:04.804 "adrfam": "IPv4", 00:20:04.804 "traddr": "10.0.0.1", 00:20:04.804 "trsvcid": "36068" 00:20:04.804 }, 00:20:04.804 "auth": { 00:20:04.804 "state": "completed", 00:20:04.804 "digest": "sha256", 00:20:04.804 "dhgroup": "null" 00:20:04.804 } 00:20:04.804 } 00:20:04.804 ]' 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.804 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.063 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.063 12:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.063 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.063 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:05.063 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.630 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.895 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.154 00:20:06.154 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.154 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.154 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.413 { 00:20:06.413 "cntlid": 7, 00:20:06.413 "qid": 0, 00:20:06.413 "state": "enabled", 00:20:06.413 "thread": "nvmf_tgt_poll_group_000", 00:20:06.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.413 "listen_address": { 00:20:06.413 "trtype": "TCP", 00:20:06.413 "adrfam": "IPv4", 00:20:06.413 "traddr": "10.0.0.2", 00:20:06.413 "trsvcid": "4420" 00:20:06.413 }, 00:20:06.413 "peer_address": { 00:20:06.413 "trtype": "TCP", 00:20:06.413 "adrfam": "IPv4", 00:20:06.413 "traddr": "10.0.0.1", 00:20:06.413 "trsvcid": "36108" 00:20:06.413 }, 00:20:06.413 "auth": { 00:20:06.413 "state": "completed", 00:20:06.413 "digest": "sha256", 00:20:06.413 "dhgroup": "null" 00:20:06.413 } 00:20:06.413 } 00:20:06.413 ]' 00:20:06.413 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.413 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.413 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.413 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.413 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.672 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.672 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.672 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.672 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:06.672 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.240 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.498 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.756 00:20:07.756 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.756 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.756 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.014 { 00:20:08.014 "cntlid": 9, 00:20:08.014 "qid": 0, 00:20:08.014 "state": "enabled", 00:20:08.014 "thread": "nvmf_tgt_poll_group_000", 00:20:08.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.014 "listen_address": { 00:20:08.014 "trtype": "TCP", 00:20:08.014 "adrfam": "IPv4", 00:20:08.014 "traddr": "10.0.0.2", 00:20:08.014 "trsvcid": "4420" 00:20:08.014 }, 00:20:08.014 "peer_address": { 00:20:08.014 "trtype": "TCP", 00:20:08.014 "adrfam": "IPv4", 00:20:08.014 "traddr": "10.0.0.1", 00:20:08.014 "trsvcid": "36140" 00:20:08.014 }, 00:20:08.014 "auth": { 00:20:08.014 "state": "completed", 00:20:08.014 "digest": "sha256", 00:20:08.014 "dhgroup": "ffdhe2048" 00:20:08.014 } 00:20:08.014 } 00:20:08.014 ]' 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.014 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.271 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:08.271 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:08.839 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.098 12:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.098 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.099 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.357 00:20:09.357 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.357 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.357 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.616 { 00:20:09.616 "cntlid": 11, 00:20:09.616 "qid": 0, 00:20:09.616 "state": "enabled", 00:20:09.616 "thread": "nvmf_tgt_poll_group_000", 00:20:09.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.616 "listen_address": { 00:20:09.616 "trtype": "TCP", 00:20:09.616 "adrfam": "IPv4", 00:20:09.616 "traddr": "10.0.0.2", 00:20:09.616 "trsvcid": "4420" 00:20:09.616 }, 00:20:09.616 "peer_address": { 00:20:09.616 "trtype": "TCP", 00:20:09.616 "adrfam": "IPv4", 00:20:09.616 "traddr": "10.0.0.1", 00:20:09.616 "trsvcid": "36158" 00:20:09.616 }, 00:20:09.616 "auth": { 00:20:09.616 "state": "completed", 00:20:09.616 "digest": "sha256", 00:20:09.616 "dhgroup": "ffdhe2048" 00:20:09.616 } 00:20:09.616 } 00:20:09.616 ]' 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.616 12:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.616 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.875 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:09.875 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:10.443 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.444 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.702 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:10.702 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.702 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.703 12:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.703 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.962 00:20:10.962 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.962 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.962 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.221 { 00:20:11.221 "cntlid": 13, 00:20:11.221 "qid": 0, 00:20:11.221 "state": "enabled", 00:20:11.221 "thread": "nvmf_tgt_poll_group_000", 00:20:11.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.221 "listen_address": { 00:20:11.221 "trtype": "TCP", 00:20:11.221 "adrfam": "IPv4", 00:20:11.221 "traddr": "10.0.0.2", 00:20:11.221 "trsvcid": "4420" 00:20:11.221 }, 00:20:11.221 "peer_address": { 00:20:11.221 "trtype": "TCP", 00:20:11.221 "adrfam": "IPv4", 00:20:11.221 "traddr": "10.0.0.1", 00:20:11.221 "trsvcid": "36188" 00:20:11.221 }, 00:20:11.221 "auth": { 00:20:11.221 "state": "completed", 00:20:11.221 "digest": 
"sha256", 00:20:11.221 "dhgroup": "ffdhe2048" 00:20:11.221 } 00:20:11.221 } 00:20:11.221 ]' 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.221 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.480 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:11.480 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.048 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.307 12:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.307 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.566 00:20:12.566 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.566 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.566 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.825 { 00:20:12.825 "cntlid": 15, 00:20:12.825 "qid": 0, 00:20:12.825 "state": "enabled", 00:20:12.825 "thread": "nvmf_tgt_poll_group_000", 00:20:12.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.825 "listen_address": { 00:20:12.825 "trtype": "TCP", 00:20:12.825 "adrfam": "IPv4", 00:20:12.825 "traddr": "10.0.0.2", 00:20:12.825 "trsvcid": "4420" 00:20:12.825 }, 00:20:12.825 "peer_address": { 00:20:12.825 "trtype": "TCP", 00:20:12.825 "adrfam": "IPv4", 00:20:12.825 "traddr": "10.0.0.1", 00:20:12.825 
"trsvcid": "57094" 00:20:12.825 }, 00:20:12.825 "auth": { 00:20:12.825 "state": "completed", 00:20:12.825 "digest": "sha256", 00:20:12.825 "dhgroup": "ffdhe2048" 00:20:12.825 } 00:20:12.825 } 00:20:12.825 ]' 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.825 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.826 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.826 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.826 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.826 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.084 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:13.084 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.652 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:13.911 12:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.911 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.170 00:20:14.170 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.170 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.170 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.429 { 00:20:14.429 "cntlid": 17, 00:20:14.429 "qid": 0, 00:20:14.429 "state": "enabled", 00:20:14.429 "thread": "nvmf_tgt_poll_group_000", 00:20:14.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.429 "listen_address": { 00:20:14.429 "trtype": "TCP", 00:20:14.429 "adrfam": "IPv4", 
00:20:14.429 "traddr": "10.0.0.2", 00:20:14.429 "trsvcid": "4420" 00:20:14.429 }, 00:20:14.429 "peer_address": { 00:20:14.429 "trtype": "TCP", 00:20:14.429 "adrfam": "IPv4", 00:20:14.429 "traddr": "10.0.0.1", 00:20:14.429 "trsvcid": "57116" 00:20:14.429 }, 00:20:14.429 "auth": { 00:20:14.429 "state": "completed", 00:20:14.429 "digest": "sha256", 00:20:14.429 "dhgroup": "ffdhe3072" 00:20:14.429 } 00:20:14.429 } 00:20:14.429 ]' 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.429 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.429 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.429 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.429 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.429 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.430 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.688 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:14.688 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.256 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.515 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.774 00:20:15.774 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.774 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.774 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.033 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.033 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.033 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.033 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.034 { 
00:20:16.034 "cntlid": 19, 00:20:16.034 "qid": 0, 00:20:16.034 "state": "enabled", 00:20:16.034 "thread": "nvmf_tgt_poll_group_000", 00:20:16.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.034 "listen_address": { 00:20:16.034 "trtype": "TCP", 00:20:16.034 "adrfam": "IPv4", 00:20:16.034 "traddr": "10.0.0.2", 00:20:16.034 "trsvcid": "4420" 00:20:16.034 }, 00:20:16.034 "peer_address": { 00:20:16.034 "trtype": "TCP", 00:20:16.034 "adrfam": "IPv4", 00:20:16.034 "traddr": "10.0.0.1", 00:20:16.034 "trsvcid": "57146" 00:20:16.034 }, 00:20:16.034 "auth": { 00:20:16.034 "state": "completed", 00:20:16.034 "digest": "sha256", 00:20:16.034 "dhgroup": "ffdhe3072" 00:20:16.034 } 00:20:16.034 } 00:20:16.034 ]' 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.034 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.294 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:16.294 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.861 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.120 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.378 00:20:17.378 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.378 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.379 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.379 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.379 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.379 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.379 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.638 12:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.638 { 00:20:17.638 "cntlid": 21, 00:20:17.638 "qid": 0, 00:20:17.638 "state": "enabled", 00:20:17.638 "thread": "nvmf_tgt_poll_group_000", 00:20:17.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.638 "listen_address": { 00:20:17.638 "trtype": "TCP", 00:20:17.638 "adrfam": "IPv4", 00:20:17.638 "traddr": "10.0.0.2", 00:20:17.638 "trsvcid": "4420" 00:20:17.638 }, 00:20:17.638 "peer_address": { 00:20:17.638 "trtype": "TCP", 00:20:17.638 "adrfam": "IPv4", 00:20:17.638 "traddr": "10.0.0.1", 00:20:17.638 "trsvcid": "57176" 00:20:17.638 }, 00:20:17.638 "auth": { 00:20:17.638 "state": "completed", 00:20:17.638 "digest": "sha256", 00:20:17.638 "dhgroup": "ffdhe3072" 00:20:17.638 } 00:20:17.638 } 00:20:17.638 ]' 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.638 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.898 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:17.898 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:18.465 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.466 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.466 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.724 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.724 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.724 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.724 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.724 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.983 12:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.983 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.983 { 00:20:18.983 "cntlid": 23, 00:20:18.983 "qid": 0, 00:20:18.983 "state": "enabled", 00:20:18.983 "thread": "nvmf_tgt_poll_group_000", 00:20:18.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.983 "listen_address": { 00:20:18.983 "trtype": "TCP", 00:20:18.983 "adrfam": "IPv4", 00:20:18.983 "traddr": "10.0.0.2", 00:20:18.983 "trsvcid": "4420" 00:20:18.983 }, 00:20:18.983 "peer_address": { 00:20:18.983 "trtype": "TCP", 00:20:18.983 "adrfam": "IPv4", 00:20:18.983 "traddr": "10.0.0.1", 00:20:18.983 "trsvcid": "57202" 00:20:18.983 }, 00:20:18.983 "auth": { 00:20:18.984 "state": "completed", 00:20:18.984 "digest": "sha256", 00:20:18.984 "dhgroup": "ffdhe3072" 00:20:18.984 } 00:20:18.984 } 00:20:18.984 ]' 00:20:18.984 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.242 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.242 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.242 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.242 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.243 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.243 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.243 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.501 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:19.501 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.070 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.329 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.588 { 00:20:20.588 "cntlid": 25, 00:20:20.588 "qid": 0, 00:20:20.588 "state": "enabled", 00:20:20.588 "thread": "nvmf_tgt_poll_group_000", 00:20:20.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.588 "listen_address": { 00:20:20.588 "trtype": "TCP", 00:20:20.588 "adrfam": "IPv4", 00:20:20.588 "traddr": "10.0.0.2", 00:20:20.588 "trsvcid": "4420" 00:20:20.588 }, 00:20:20.588 "peer_address": { 00:20:20.588 "trtype": "TCP", 00:20:20.588 "adrfam": "IPv4", 00:20:20.588 "traddr": "10.0.0.1", 00:20:20.588 "trsvcid": "57236" 00:20:20.588 }, 00:20:20.588 "auth": { 00:20:20.588 "state": "completed", 00:20:20.588 "digest": "sha256", 00:20:20.588 "dhgroup": "ffdhe4096" 00:20:20.588 } 00:20:20.588 } 00:20:20.588 ]' 00:20:20.588 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.847 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.106 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:21.106 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.674 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.933 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.192 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.192 { 00:20:22.192 "cntlid": 27, 00:20:22.192 "qid": 0, 00:20:22.192 "state": "enabled", 00:20:22.192 "thread": "nvmf_tgt_poll_group_000", 00:20:22.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.193 "listen_address": { 00:20:22.193 "trtype": "TCP", 00:20:22.193 "adrfam": "IPv4", 00:20:22.193 "traddr": "10.0.0.2", 00:20:22.193 "trsvcid": "4420" 00:20:22.193 }, 00:20:22.193 "peer_address": { 00:20:22.193 "trtype": "TCP", 00:20:22.193 "adrfam": "IPv4", 00:20:22.193 "traddr": "10.0.0.1", 00:20:22.193 "trsvcid": "54640" 00:20:22.193 }, 00:20:22.193 "auth": { 00:20:22.193 "state": "completed", 00:20:22.193 "digest": "sha256", 00:20:22.193 "dhgroup": "ffdhe4096" 00:20:22.193 } 00:20:22.193 } 00:20:22.193 ]' 00:20:22.193 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.452 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.711 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:22.711 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:23.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.279 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.537 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.796 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
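Every cycle in this trace follows the same connect/verify/disconnect shape; only the key id, the DH group, and the ephemeral source port change between iterations. The condensed sketch below is reconstructed from the xtrace output above, not from the literal target/auth.sh source: the rpc.py path, socket, NQNs, addresses, and flags are taken verbatim from the log, while $KEY2_SECRET and $CKEY2_SECRET are placeholders standing in for the DHHC-1 strings shown in the nvme connect lines.

    # One verification cycle, as exercised by this trace (sketch, bash).
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # 1. Pin the host-side bdev layer to a single digest/dhgroup combination.
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # 2. Register the host on the subsystem with its DH-HMAC-CHAP key; keys 0-2
    #    also carry a controller (bidirectional) key, key3 does not.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. Attaching a controller forces the authentication handshake.
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 4. Inspect the resulting queue pair and assert the negotiated parameters.
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    $RPC bdev_nvme_detach_controller nvme0
    # 5. Repeat the handshake through the kernel initiator with nvme-cli,
    #    passing the same secrets in DHHC-1 wire format, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$KEY2_SECRET" --dhchap-ctrl-secret "$CKEY2_SECRET"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The outer loops are visible in the trace as target/auth.sh@119 (for dhgroup: ffdhe3072, ffdhe4096, and ffdhe6144 in this stretch) and target/auth.sh@120 (for keyid 0 through 3). For key3 the ckeys entry is empty, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68 drops the controller key, which is why the key3 iterations authenticate the host only, without the bidirectional step. In the DHHC-1:NN: prefix of each secret, NN identifies the key's hash transform per the NVMe DH-HMAC-CHAP key format (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which matches the 00 through 03 prefixes on key0 through key3 in the nvme connect lines of this log.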
00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.796 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.055 { 00:20:24.055 "cntlid": 29, 00:20:24.055 "qid": 0, 00:20:24.055 "state": "enabled", 00:20:24.055 "thread": "nvmf_tgt_poll_group_000", 00:20:24.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.055 "listen_address": { 00:20:24.055 "trtype": "TCP", 00:20:24.055 "adrfam": "IPv4", 00:20:24.055 "traddr": "10.0.0.2", 00:20:24.055 "trsvcid": "4420" 00:20:24.055 }, 00:20:24.055 "peer_address": { 00:20:24.055 "trtype": "TCP", 00:20:24.055 "adrfam": "IPv4", 00:20:24.055 "traddr": "10.0.0.1", 00:20:24.055 "trsvcid": "54660" 00:20:24.055 }, 00:20:24.055 "auth": { 00:20:24.055 "state": "completed", 00:20:24.055 "digest": "sha256", 00:20:24.055 "dhgroup": "ffdhe4096" 00:20:24.055 } 00:20:24.055 } 00:20:24.055 ]' 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.055 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.314 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:24.314 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: 
--dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.882 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.141 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.400 00:20:25.400 12:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.400 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.400 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.400 { 00:20:25.400 "cntlid": 31, 00:20:25.400 "qid": 0, 00:20:25.400 "state": "enabled", 00:20:25.400 "thread": "nvmf_tgt_poll_group_000", 00:20:25.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.400 "listen_address": { 00:20:25.400 "trtype": "TCP", 00:20:25.400 "adrfam": "IPv4", 00:20:25.400 "traddr": "10.0.0.2", 00:20:25.400 "trsvcid": "4420" 00:20:25.400 }, 00:20:25.400 "peer_address": { 00:20:25.400 "trtype": "TCP", 00:20:25.400 "adrfam": "IPv4", 00:20:25.400 "traddr": "10.0.0.1", 00:20:25.400 "trsvcid": "54698" 00:20:25.400 }, 00:20:25.400 "auth": { 00:20:25.400 "state": "completed", 00:20:25.400 "digest": "sha256", 00:20:25.400 "dhgroup": "ffdhe4096" 00:20:25.400 } 00:20:25.400 } 00:20:25.400 ]' 00:20:25.400 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.659 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.917 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:25.917 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:26.484 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.484 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.484 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.484 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.484 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.484 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.484 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.484 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.484 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.742 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.001 00:20:27.001 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.001 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.001 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.259 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.259 { 00:20:27.259 "cntlid": 33, 00:20:27.260 "qid": 0, 00:20:27.260 "state": "enabled", 00:20:27.260 "thread": "nvmf_tgt_poll_group_000", 00:20:27.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.260 "listen_address": { 00:20:27.260 "trtype": "TCP", 00:20:27.260 "adrfam": "IPv4", 00:20:27.260 "traddr": "10.0.0.2", 00:20:27.260 "trsvcid": "4420" 00:20:27.260 }, 00:20:27.260 "peer_address": { 00:20:27.260 "trtype": "TCP", 00:20:27.260 "adrfam": "IPv4", 00:20:27.260 "traddr": "10.0.0.1", 00:20:27.260 "trsvcid": "54718" 00:20:27.260 }, 00:20:27.260 "auth": { 00:20:27.260 "state": "completed", 00:20:27.260 "digest": "sha256", 00:20:27.260 "dhgroup": "ffdhe6144" 00:20:27.260 } 00:20:27.260 } 00:20:27.260 ]' 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.260 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.518 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret 
DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:27.518 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.085 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.345 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.604 00:20:28.604 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.604 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.604 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.862 { 00:20:28.862 "cntlid": 35, 00:20:28.862 "qid": 0, 00:20:28.862 "state": "enabled", 00:20:28.862 "thread": "nvmf_tgt_poll_group_000", 00:20:28.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.862 "listen_address": { 00:20:28.862 "trtype": "TCP", 00:20:28.862 "adrfam": "IPv4", 00:20:28.862 "traddr": "10.0.0.2", 00:20:28.862 "trsvcid": "4420" 00:20:28.862 }, 00:20:28.862 "peer_address": { 00:20:28.862 "trtype": "TCP", 00:20:28.862 "adrfam": "IPv4", 00:20:28.862 "traddr": "10.0.0.1", 00:20:28.862 "trsvcid": "54756" 00:20:28.862 }, 00:20:28.862 "auth": { 00:20:28.862 "state": "completed", 00:20:28.862 "digest": "sha256", 00:20:28.862 "dhgroup": "ffdhe6144" 00:20:28.862 } 00:20:28.862 } 00:20:28.862 ]' 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.862 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.122 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.122 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.122 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.122 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:29.122 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.690 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.949 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.518 00:20:30.518 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.518 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.518 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.518 { 00:20:30.518 "cntlid": 37, 00:20:30.518 "qid": 0, 00:20:30.518 "state": "enabled", 00:20:30.518 "thread": "nvmf_tgt_poll_group_000", 00:20:30.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.518 "listen_address": { 00:20:30.518 "trtype": "TCP", 00:20:30.518 "adrfam": "IPv4", 00:20:30.518 "traddr": "10.0.0.2", 00:20:30.518 "trsvcid": "4420" 00:20:30.518 }, 00:20:30.518 "peer_address": { 00:20:30.518 "trtype": "TCP", 00:20:30.518 "adrfam": "IPv4", 00:20:30.518 "traddr": "10.0.0.1", 00:20:30.518 "trsvcid": "54768" 00:20:30.518 }, 00:20:30.518 "auth": { 00:20:30.518 "state": "completed", 00:20:30.518 "digest": "sha256", 00:20:30.518 "dhgroup": "ffdhe6144" 00:20:30.518 } 00:20:30.518 } 00:20:30.518 ]' 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.518 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.777 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.777 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:30.777 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.777 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:30.777 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:31.345 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.345 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.604 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.605 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.605 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.605 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.605 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.605 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.605 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.172 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.172 { 00:20:32.172 "cntlid": 39, 00:20:32.172 "qid": 0, 00:20:32.172 "state": "enabled", 00:20:32.172 "thread": "nvmf_tgt_poll_group_000", 00:20:32.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.172 "listen_address": { 00:20:32.172 "trtype": "TCP", 00:20:32.172 "adrfam": "IPv4", 00:20:32.172 "traddr": "10.0.0.2", 00:20:32.172 "trsvcid": "4420" 00:20:32.172 }, 00:20:32.172 "peer_address": { 00:20:32.172 "trtype": "TCP", 00:20:32.172 "adrfam": "IPv4", 00:20:32.172 "traddr": "10.0.0.1", 00:20:32.172 "trsvcid": "43368" 00:20:32.172 }, 00:20:32.172 "auth": { 00:20:32.172 "state": "completed", 00:20:32.172 "digest": "sha256", 00:20:32.172 "dhgroup": "ffdhe6144" 00:20:32.172 } 00:20:32.172 } 00:20:32.172 ]' 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.172 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.173 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.432 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.432 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.432 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:32.432 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.432 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.691 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:32.691 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
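The iteration that just completed shows the two RPC calls every cell of the auth matrix boils down to: the host-side daemon is pinned to a single digest/DH-group pair with bdev_nvme_set_options, and the host NQN is (re)registered on the subsystem with the key pair under test via nvmf_subsystem_add_host. A minimal sketch of that sequence, assuming the keyring entries key0/ckey0 were loaded earlier in the run (that setup is not part of this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Host side: advertise only the digest/DH-group combination under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: allow this host on the subsystem, naming the key to
    # authenticate with (key0) and the controller key to expect (ckey0);
    # both are keyring names assumed to be registered earlier in the run.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0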
00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.259 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.826 00:20:33.826 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.826 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.826 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.086 { 00:20:34.086 "cntlid": 41, 00:20:34.086 "qid": 0, 00:20:34.086 "state": "enabled", 00:20:34.086 "thread": "nvmf_tgt_poll_group_000", 00:20:34.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.086 "listen_address": { 00:20:34.086 "trtype": "TCP", 00:20:34.086 "adrfam": "IPv4", 00:20:34.086 "traddr": "10.0.0.2", 00:20:34.086 "trsvcid": "4420" 00:20:34.086 }, 00:20:34.086 "peer_address": { 00:20:34.086 "trtype": "TCP", 00:20:34.086 "adrfam": "IPv4", 00:20:34.086 "traddr": "10.0.0.1", 00:20:34.086 "trsvcid": "43378" 00:20:34.086 }, 00:20:34.086 "auth": { 00:20:34.086 "state": "completed", 00:20:34.086 "digest": "sha256", 00:20:34.086 "dhgroup": "ffdhe8192" 00:20:34.086 } 00:20:34.086 } 00:20:34.086 ]' 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.086 12:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.086 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.344 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:34.344 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.912 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.171 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:35.171 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.171 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.171 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.171 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.172 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.740 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.740 { 00:20:35.740 "cntlid": 43, 00:20:35.740 "qid": 0, 00:20:35.740 "state": "enabled", 00:20:35.740 "thread": "nvmf_tgt_poll_group_000", 00:20:35.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.740 "listen_address": { 00:20:35.740 "trtype": "TCP", 00:20:35.740 "adrfam": "IPv4", 00:20:35.740 "traddr": "10.0.0.2", 00:20:35.740 "trsvcid": "4420" 00:20:35.740 }, 00:20:35.740 "peer_address": { 00:20:35.740 "trtype": "TCP", 00:20:35.740 "adrfam": "IPv4", 00:20:35.740 "traddr": "10.0.0.1", 00:20:35.740 "trsvcid": "43404" 00:20:35.740 }, 00:20:35.740 "auth": { 00:20:35.740 "state": "completed", 00:20:35.740 "digest": "sha256", 00:20:35.740 "dhgroup": "ffdhe8192" 00:20:35.740 } 00:20:35.740 } 00:20:35.740 ]' 00:20:35.740 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.999 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.258 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:36.258 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.826 12:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.826 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.086 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.086 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.086 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.086 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.344 00:20:37.344 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.344 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.344 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.603 { 00:20:37.603 "cntlid": 45, 00:20:37.603 "qid": 0, 00:20:37.603 "state": "enabled", 00:20:37.603 "thread": "nvmf_tgt_poll_group_000", 00:20:37.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.603 "listen_address": { 00:20:37.603 "trtype": "TCP", 00:20:37.603 "adrfam": "IPv4", 00:20:37.603 "traddr": "10.0.0.2", 00:20:37.603 "trsvcid": "4420" 00:20:37.603 }, 00:20:37.603 "peer_address": { 00:20:37.603 "trtype": "TCP", 00:20:37.603 "adrfam": "IPv4", 00:20:37.603 "traddr": "10.0.0.1", 00:20:37.603 "trsvcid": "43416" 00:20:37.603 }, 00:20:37.603 "auth": { 00:20:37.603 "state": "completed", 00:20:37.603 "digest": "sha256", 00:20:37.603 "dhgroup": "ffdhe8192" 00:20:37.603 } 00:20:37.603 } 00:20:37.603 ]' 00:20:37.603 
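The qpair dump that just printed is the raw material for the pass/fail check: the test reads the controller name back on the host side, then queries the target for the subsystem's qpairs and asserts the negotiated digest, DH group, and auth state with jq. A condensed sketch of that check, reusing the variables from the sketch above and the exact jq paths from the trace:

    # Host: the attached controller must be the bdev we created.
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target: the qpair must report the configured digest/DH group and an
    # authentication state of "completed".
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]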
12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.603 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.861 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.861 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.861 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.861 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.861 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.862 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:37.862 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:38.429 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.429 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.429 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.429 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.689 12:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.689 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.257 00:20:39.257 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.257 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.257 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.517 { 00:20:39.517 "cntlid": 47, 00:20:39.517 "qid": 0, 00:20:39.517 "state": "enabled", 00:20:39.517 "thread": "nvmf_tgt_poll_group_000", 00:20:39.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.517 "listen_address": { 00:20:39.517 "trtype": "TCP", 00:20:39.517 "adrfam": "IPv4", 00:20:39.517 "traddr": "10.0.0.2", 00:20:39.517 "trsvcid": "4420" 00:20:39.517 }, 00:20:39.517 "peer_address": { 00:20:39.517 "trtype": "TCP", 00:20:39.517 "adrfam": "IPv4", 00:20:39.517 "traddr": "10.0.0.1", 00:20:39.517 "trsvcid": "43440" 00:20:39.517 }, 00:20:39.517 "auth": { 00:20:39.517 "state": "completed", 00:20:39.517 
"digest": "sha256", 00:20:39.517 "dhgroup": "ffdhe8192" 00:20:39.517 } 00:20:39.517 } 00:20:39.517 ]' 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.517 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.776 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:39.776 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.345 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:40.604 12:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.604 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.863 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.864 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.123 { 00:20:41.123 "cntlid": 49, 00:20:41.123 "qid": 0, 00:20:41.123 "state": "enabled", 00:20:41.123 "thread": "nvmf_tgt_poll_group_000", 00:20:41.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.123 "listen_address": { 00:20:41.123 "trtype": "TCP", 00:20:41.123 "adrfam": "IPv4", 
00:20:41.123 "traddr": "10.0.0.2", 00:20:41.123 "trsvcid": "4420" 00:20:41.123 }, 00:20:41.123 "peer_address": { 00:20:41.123 "trtype": "TCP", 00:20:41.123 "adrfam": "IPv4", 00:20:41.123 "traddr": "10.0.0.1", 00:20:41.123 "trsvcid": "43466" 00:20:41.123 }, 00:20:41.123 "auth": { 00:20:41.123 "state": "completed", 00:20:41.123 "digest": "sha384", 00:20:41.123 "dhgroup": "null" 00:20:41.123 } 00:20:41.123 } 00:20:41.123 ]' 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.123 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.382 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:41.382 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.951 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.210 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.210 00:20:42.469 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.469 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.469 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.469 { 00:20:42.469 "cntlid": 51, 00:20:42.469 "qid": 0, 00:20:42.469 "state": "enabled", 
00:20:42.469 "thread": "nvmf_tgt_poll_group_000", 00:20:42.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.469 "listen_address": { 00:20:42.469 "trtype": "TCP", 00:20:42.469 "adrfam": "IPv4", 00:20:42.469 "traddr": "10.0.0.2", 00:20:42.469 "trsvcid": "4420" 00:20:42.469 }, 00:20:42.469 "peer_address": { 00:20:42.469 "trtype": "TCP", 00:20:42.469 "adrfam": "IPv4", 00:20:42.469 "traddr": "10.0.0.1", 00:20:42.469 "trsvcid": "33036" 00:20:42.469 }, 00:20:42.469 "auth": { 00:20:42.469 "state": "completed", 00:20:42.469 "digest": "sha384", 00:20:42.469 "dhgroup": "null" 00:20:42.469 } 00:20:42.469 } 00:20:42.469 ]' 00:20:42.469 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.728 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.987 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:42.988 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.557 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.815 00:20:43.815 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.816 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.816 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.074 12:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.074 { 00:20:44.074 "cntlid": 53, 00:20:44.074 "qid": 0, 00:20:44.074 "state": "enabled", 00:20:44.074 "thread": "nvmf_tgt_poll_group_000", 00:20:44.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.074 "listen_address": { 00:20:44.074 "trtype": "TCP", 00:20:44.074 "adrfam": "IPv4", 00:20:44.074 "traddr": "10.0.0.2", 00:20:44.074 "trsvcid": "4420" 00:20:44.074 }, 00:20:44.074 "peer_address": { 00:20:44.074 "trtype": "TCP", 00:20:44.074 "adrfam": "IPv4", 00:20:44.074 "traddr": "10.0.0.1", 00:20:44.074 "trsvcid": "33062" 00:20:44.074 }, 00:20:44.074 "auth": { 00:20:44.074 "state": "completed", 00:20:44.074 "digest": "sha384", 00:20:44.074 "dhgroup": "null" 00:20:44.074 } 00:20:44.074 } 00:20:44.074 ]' 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.074 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.333 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.333 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.333 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.333 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.333 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.592 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:44.592 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.161 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.420 00:20:45.420 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.420 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.420 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.679 { 00:20:45.679 "cntlid": 55, 00:20:45.679 "qid": 0, 00:20:45.679 "state": "enabled", 00:20:45.679 "thread": "nvmf_tgt_poll_group_000", 00:20:45.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.679 "listen_address": { 00:20:45.679 "trtype": "TCP", 00:20:45.679 "adrfam": "IPv4", 00:20:45.679 "traddr": "10.0.0.2", 00:20:45.679 "trsvcid": "4420" 00:20:45.679 }, 00:20:45.679 "peer_address": { 00:20:45.679 "trtype": "TCP", 00:20:45.679 "adrfam": "IPv4", 00:20:45.679 "traddr": "10.0.0.1", 00:20:45.679 "trsvcid": "33080" 00:20:45.679 }, 00:20:45.679 "auth": { 00:20:45.679 "state": "completed", 00:20:45.679 "digest": "sha384", 00:20:45.679 "dhgroup": "null" 00:20:45.679 } 00:20:45.679 } 00:20:45.679 ]' 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.679 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.938 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.938 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.938 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.938 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.938 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.198 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:46.198 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.766 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.766 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.025 00:20:47.025 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.025 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.025 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.284 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.284 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.284 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:47.284 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.284 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.285 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.285 { 00:20:47.285 "cntlid": 57, 00:20:47.285 "qid": 0, 00:20:47.285 "state": "enabled", 00:20:47.285 "thread": "nvmf_tgt_poll_group_000", 00:20:47.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.285 "listen_address": { 00:20:47.285 "trtype": "TCP", 00:20:47.285 "adrfam": "IPv4", 00:20:47.285 "traddr": "10.0.0.2", 00:20:47.285 "trsvcid": "4420" 00:20:47.285 }, 00:20:47.285 "peer_address": { 00:20:47.285 "trtype": "TCP", 00:20:47.285 "adrfam": "IPv4", 00:20:47.285 "traddr": "10.0.0.1", 00:20:47.285 "trsvcid": "33102" 00:20:47.285 }, 00:20:47.285 "auth": { 00:20:47.285 "state": "completed", 00:20:47.285 "digest": "sha384", 00:20:47.285 "dhgroup": "ffdhe2048" 00:20:47.285 } 00:20:47.285 } 00:20:47.285 ]' 00:20:47.285 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.285 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.285 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.544 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.544 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.544 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.544 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.544 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.544 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:47.544 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.111 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.371 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.371 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.371 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.371 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.371 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.630 00:20:48.630 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.630 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.630 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.890 { 00:20:48.890 "cntlid": 59, 00:20:48.890 "qid": 0, 00:20:48.890 "state": "enabled", 00:20:48.890 "thread": "nvmf_tgt_poll_group_000", 00:20:48.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.890 "listen_address": { 00:20:48.890 "trtype": "TCP", 00:20:48.890 "adrfam": "IPv4", 00:20:48.890 "traddr": "10.0.0.2", 00:20:48.890 "trsvcid": "4420" 00:20:48.890 }, 00:20:48.890 "peer_address": { 00:20:48.890 "trtype": "TCP", 00:20:48.890 "adrfam": "IPv4", 00:20:48.890 "traddr": "10.0.0.1", 00:20:48.890 "trsvcid": "33124" 00:20:48.890 }, 00:20:48.890 "auth": { 00:20:48.890 "state": "completed", 00:20:48.890 "digest": "sha384", 00:20:48.890 "dhgroup": "ffdhe2048" 00:20:48.890 } 00:20:48.890 } 00:20:48.890 ]' 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.890 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:49.149 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.717 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.976 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.235 00:20:50.235 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.235 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:50.235 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.495 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.495 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.495 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.495 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.495 { 00:20:50.495 "cntlid": 61, 00:20:50.495 "qid": 0, 00:20:50.495 "state": "enabled", 00:20:50.495 "thread": "nvmf_tgt_poll_group_000", 00:20:50.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.495 "listen_address": { 00:20:50.495 "trtype": "TCP", 00:20:50.495 "adrfam": "IPv4", 00:20:50.495 "traddr": "10.0.0.2", 00:20:50.495 "trsvcid": "4420" 00:20:50.495 }, 00:20:50.495 "peer_address": { 00:20:50.495 "trtype": "TCP", 00:20:50.495 "adrfam": "IPv4", 00:20:50.495 "traddr": "10.0.0.1", 00:20:50.495 "trsvcid": "33150" 00:20:50.495 }, 00:20:50.495 "auth": { 00:20:50.495 "state": "completed", 00:20:50.495 "digest": "sha384", 00:20:50.495 "dhgroup": "ffdhe2048" 00:20:50.495 } 00:20:50.495 } 00:20:50.495 ]' 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.495 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.754 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:50.754 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.322 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.581 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.840 00:20:51.840 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.840 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.840 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.099 { 00:20:52.099 "cntlid": 63, 00:20:52.099 "qid": 0, 00:20:52.099 "state": "enabled", 00:20:52.099 "thread": "nvmf_tgt_poll_group_000", 00:20:52.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.099 "listen_address": { 00:20:52.099 "trtype": "TCP", 00:20:52.099 "adrfam": "IPv4", 00:20:52.099 "traddr": "10.0.0.2", 00:20:52.099 "trsvcid": "4420" 00:20:52.099 }, 00:20:52.099 "peer_address": { 00:20:52.099 "trtype": "TCP", 00:20:52.099 "adrfam": "IPv4", 00:20:52.099 "traddr": "10.0.0.1", 00:20:52.099 "trsvcid": "44892" 00:20:52.099 }, 00:20:52.099 "auth": { 00:20:52.099 "state": "completed", 00:20:52.099 "digest": "sha384", 00:20:52.099 "dhgroup": "ffdhe2048" 00:20:52.099 } 00:20:52.099 } 00:20:52.099 ]' 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.099 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.100 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.359 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:52.359 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:52.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.927 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.186 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.445 
00:20:53.445 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.445 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.445 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.705 { 00:20:53.705 "cntlid": 65, 00:20:53.705 "qid": 0, 00:20:53.705 "state": "enabled", 00:20:53.705 "thread": "nvmf_tgt_poll_group_000", 00:20:53.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.705 "listen_address": { 00:20:53.705 "trtype": "TCP", 00:20:53.705 "adrfam": "IPv4", 00:20:53.705 "traddr": "10.0.0.2", 00:20:53.705 "trsvcid": "4420" 00:20:53.705 }, 00:20:53.705 "peer_address": { 00:20:53.705 "trtype": "TCP", 00:20:53.705 "adrfam": "IPv4", 00:20:53.705 "traddr": "10.0.0.1", 00:20:53.705 "trsvcid": "44918" 00:20:53.705 }, 00:20:53.705 "auth": { 00:20:53.705 "state": "completed", 00:20:53.705 "digest": "sha384", 00:20:53.705 "dhgroup": "ffdhe3072" 00:20:53.705 } 00:20:53.705 } 00:20:53.705 ]' 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.705 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.964 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:53.964 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.533 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.792 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.051 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.051 { 00:20:55.051 "cntlid": 67, 00:20:55.051 "qid": 0, 00:20:55.051 "state": "enabled", 00:20:55.051 "thread": "nvmf_tgt_poll_group_000", 00:20:55.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.051 "listen_address": { 00:20:55.051 "trtype": "TCP", 00:20:55.051 "adrfam": "IPv4", 00:20:55.051 "traddr": "10.0.0.2", 00:20:55.051 "trsvcid": "4420" 00:20:55.051 }, 00:20:55.051 "peer_address": { 00:20:55.051 "trtype": "TCP", 00:20:55.051 "adrfam": "IPv4", 00:20:55.051 "traddr": "10.0.0.1", 00:20:55.051 "trsvcid": "44928" 00:20:55.051 }, 00:20:55.051 "auth": { 00:20:55.051 "state": "completed", 00:20:55.051 "digest": "sha384", 00:20:55.051 "dhgroup": "ffdhe3072" 00:20:55.051 } 00:20:55.051 } 00:20:55.051 ]' 00:20:55.051 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.310 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.568 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret 
DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:55.568 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.137 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.396 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.655 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.655 { 00:20:56.655 "cntlid": 69, 00:20:56.655 "qid": 0, 00:20:56.655 "state": "enabled", 00:20:56.655 "thread": "nvmf_tgt_poll_group_000", 00:20:56.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.655 "listen_address": { 00:20:56.655 "trtype": "TCP", 00:20:56.655 "adrfam": "IPv4", 00:20:56.655 "traddr": "10.0.0.2", 00:20:56.655 "trsvcid": "4420" 00:20:56.655 }, 00:20:56.655 "peer_address": { 00:20:56.655 "trtype": "TCP", 00:20:56.655 "adrfam": "IPv4", 00:20:56.655 "traddr": "10.0.0.1", 00:20:56.655 "trsvcid": "44950" 00:20:56.655 }, 00:20:56.655 "auth": { 00:20:56.655 "state": "completed", 00:20:56.655 "digest": "sha384", 00:20:56.655 "dhgroup": "ffdhe3072" 00:20:56.655 } 00:20:56.655 } 00:20:56.655 ]' 00:20:56.655 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.914 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:57.173 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:57.173 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.741 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.001 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.001 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.260 { 00:20:58.260 "cntlid": 71, 00:20:58.260 "qid": 0, 00:20:58.260 "state": "enabled", 00:20:58.260 "thread": "nvmf_tgt_poll_group_000", 00:20:58.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.260 "listen_address": { 00:20:58.260 "trtype": "TCP", 00:20:58.260 "adrfam": "IPv4", 00:20:58.260 "traddr": "10.0.0.2", 00:20:58.260 "trsvcid": "4420" 00:20:58.260 }, 00:20:58.260 "peer_address": { 00:20:58.260 "trtype": "TCP", 00:20:58.260 "adrfam": "IPv4", 00:20:58.260 "traddr": "10.0.0.1", 00:20:58.260 "trsvcid": "44972" 00:20:58.260 }, 00:20:58.260 "auth": { 00:20:58.260 "state": "completed", 00:20:58.260 "digest": "sha384", 00:20:58.260 "dhgroup": "ffdhe3072" 00:20:58.260 } 00:20:58.260 } 00:20:58.260 ]' 00:20:58.260 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.520 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.520 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.520 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.520 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.520 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.520 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.520 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.779 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:58.779 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.347 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.347 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.607 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
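[Annotation, not part of the captured log] Each cycle also re-validates the same key material through the kernel initiator (the nvme connect / nvme disconnect lines in this stretch) before the host entry is removed from the subsystem. A sketch of that leg under the same assumptions as above; the DHHC-1 secrets are the suite's pre-generated key blobs and are elided here:

    # kernel-initiator connect with the host secret and controller secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'   # secrets elided
    # expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: drop the host entry so the next iteration starts clean
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562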
00:20:59.607 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.607 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.607 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.866 00:20:59.866 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.866 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.866 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.866 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.866 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.867 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.867 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.867 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.867 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.867 { 00:20:59.867 "cntlid": 73, 00:20:59.867 "qid": 0, 00:20:59.867 "state": "enabled", 00:20:59.867 "thread": "nvmf_tgt_poll_group_000", 00:20:59.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.867 "listen_address": { 00:20:59.867 "trtype": "TCP", 00:20:59.867 "adrfam": "IPv4", 00:20:59.867 "traddr": "10.0.0.2", 00:20:59.867 "trsvcid": "4420" 00:20:59.867 }, 00:20:59.867 "peer_address": { 00:20:59.867 "trtype": "TCP", 00:20:59.867 "adrfam": "IPv4", 00:20:59.867 "traddr": "10.0.0.1", 00:20:59.867 "trsvcid": "45008" 00:20:59.867 }, 00:20:59.867 "auth": { 00:20:59.867 "state": "completed", 00:20:59.867 "digest": "sha384", 00:20:59.867 "dhgroup": "ffdhe4096" 00:20:59.867 } 00:20:59.867 } 00:20:59.867 ]' 00:20:59.867 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.125 
12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.125 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.384 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:00.384 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.952 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.212 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.472 00:21:01.472 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.472 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.472 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.472 { 00:21:01.472 "cntlid": 75, 00:21:01.472 "qid": 0, 00:21:01.472 "state": "enabled", 00:21:01.472 "thread": "nvmf_tgt_poll_group_000", 00:21:01.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.472 "listen_address": { 00:21:01.472 "trtype": "TCP", 00:21:01.472 "adrfam": "IPv4", 00:21:01.472 "traddr": "10.0.0.2", 00:21:01.472 "trsvcid": "4420" 00:21:01.472 }, 00:21:01.472 "peer_address": { 00:21:01.472 "trtype": "TCP", 00:21:01.472 "adrfam": "IPv4", 00:21:01.472 "traddr": "10.0.0.1", 00:21:01.472 "trsvcid": "50812" 00:21:01.472 }, 00:21:01.472 "auth": { 00:21:01.472 "state": "completed", 00:21:01.472 "digest": "sha384", 00:21:01.472 "dhgroup": "ffdhe4096" 00:21:01.472 } 00:21:01.472 } 00:21:01.472 ]' 00:21:01.472 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.731 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.990 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:01.990 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:02.558 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.559 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.817 00:21:02.817 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.817 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.817 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.076 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.076 { 00:21:03.076 "cntlid": 77, 00:21:03.076 "qid": 0, 00:21:03.076 "state": "enabled", 00:21:03.076 "thread": "nvmf_tgt_poll_group_000", 00:21:03.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.077 "listen_address": { 00:21:03.077 "trtype": "TCP", 00:21:03.077 "adrfam": "IPv4", 00:21:03.077 "traddr": "10.0.0.2", 00:21:03.077 "trsvcid": "4420" 00:21:03.077 }, 00:21:03.077 "peer_address": { 00:21:03.077 "trtype": "TCP", 00:21:03.077 "adrfam": "IPv4", 00:21:03.077 "traddr": "10.0.0.1", 00:21:03.077 "trsvcid": "50844" 00:21:03.077 }, 00:21:03.077 "auth": { 00:21:03.077 "state": "completed", 00:21:03.077 "digest": "sha384", 00:21:03.077 "dhgroup": "ffdhe4096" 00:21:03.077 } 00:21:03.077 } 00:21:03.077 ]' 00:21:03.077 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.077 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.077 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.336 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.336 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.336 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.336 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.336 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.336 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:03.336 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:03.902 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.902 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.902 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.902 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.161 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.419 00:21:04.419 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.419 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.419 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.678 { 00:21:04.678 "cntlid": 79, 00:21:04.678 "qid": 0, 00:21:04.678 "state": "enabled", 00:21:04.678 "thread": "nvmf_tgt_poll_group_000", 00:21:04.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.678 "listen_address": { 00:21:04.678 "trtype": "TCP", 00:21:04.678 "adrfam": "IPv4", 00:21:04.678 "traddr": "10.0.0.2", 00:21:04.678 "trsvcid": "4420" 00:21:04.678 }, 00:21:04.678 "peer_address": { 00:21:04.678 "trtype": "TCP", 00:21:04.678 "adrfam": "IPv4", 00:21:04.678 "traddr": "10.0.0.1", 00:21:04.678 "trsvcid": "50864" 00:21:04.678 }, 00:21:04.678 "auth": { 00:21:04.678 "state": "completed", 00:21:04.678 "digest": "sha384", 00:21:04.678 "dhgroup": "ffdhe4096" 00:21:04.678 } 00:21:04.678 } 00:21:04.678 ]' 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.678 12:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.678 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.937 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.937 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.937 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.937 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.937 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.196 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:05.196 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:05.765 12:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.765 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.334 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.334 { 00:21:06.334 "cntlid": 81, 00:21:06.334 "qid": 0, 00:21:06.334 "state": "enabled", 00:21:06.334 "thread": "nvmf_tgt_poll_group_000", 00:21:06.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.334 "listen_address": { 00:21:06.334 "trtype": "TCP", 00:21:06.334 "adrfam": "IPv4", 00:21:06.334 "traddr": "10.0.0.2", 00:21:06.334 "trsvcid": "4420" 00:21:06.334 }, 00:21:06.334 "peer_address": { 00:21:06.334 "trtype": "TCP", 00:21:06.334 "adrfam": "IPv4", 00:21:06.334 "traddr": "10.0.0.1", 00:21:06.334 "trsvcid": "50902" 00:21:06.334 }, 00:21:06.334 "auth": { 00:21:06.334 "state": "completed", 00:21:06.334 "digest": 
"sha384", 00:21:06.334 "dhgroup": "ffdhe6144" 00:21:06.334 } 00:21:06.334 } 00:21:06.334 ]' 00:21:06.334 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.593 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.852 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:06.852 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.421 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.421 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.989 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.989 { 00:21:07.989 "cntlid": 83, 00:21:07.989 "qid": 0, 00:21:07.989 "state": "enabled", 00:21:07.989 "thread": "nvmf_tgt_poll_group_000", 00:21:07.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.989 "listen_address": { 00:21:07.989 "trtype": "TCP", 00:21:07.989 "adrfam": "IPv4", 00:21:07.989 "traddr": "10.0.0.2", 00:21:07.989 
"trsvcid": "4420" 00:21:07.989 }, 00:21:07.989 "peer_address": { 00:21:07.989 "trtype": "TCP", 00:21:07.989 "adrfam": "IPv4", 00:21:07.989 "traddr": "10.0.0.1", 00:21:07.989 "trsvcid": "50922" 00:21:07.989 }, 00:21:07.989 "auth": { 00:21:07.989 "state": "completed", 00:21:07.989 "digest": "sha384", 00:21:07.989 "dhgroup": "ffdhe6144" 00:21:07.989 } 00:21:07.989 } 00:21:07.989 ]' 00:21:07.989 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.247 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.507 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:08.507 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.076 
12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.076 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.645 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.645 { 00:21:09.645 "cntlid": 85, 00:21:09.645 "qid": 0, 00:21:09.645 "state": "enabled", 00:21:09.645 "thread": "nvmf_tgt_poll_group_000", 00:21:09.645 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.645 "listen_address": { 00:21:09.645 "trtype": "TCP", 00:21:09.645 "adrfam": "IPv4", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "trsvcid": "4420" 00:21:09.645 }, 00:21:09.645 "peer_address": { 00:21:09.645 "trtype": "TCP", 00:21:09.645 "adrfam": "IPv4", 00:21:09.645 "traddr": "10.0.0.1", 00:21:09.645 "trsvcid": "50966" 00:21:09.645 }, 00:21:09.645 "auth": { 00:21:09.645 "state": "completed", 00:21:09.645 "digest": "sha384", 00:21:09.645 "dhgroup": "ffdhe6144" 00:21:09.645 } 00:21:09.645 } 00:21:09.645 ]' 00:21:09.645 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.904 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.163 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:10.163 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.732 12:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.732 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.300 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.300 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.559 { 00:21:11.559 "cntlid": 87, 
00:21:11.559 "qid": 0, 00:21:11.559 "state": "enabled", 00:21:11.559 "thread": "nvmf_tgt_poll_group_000", 00:21:11.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.559 "listen_address": { 00:21:11.559 "trtype": "TCP", 00:21:11.559 "adrfam": "IPv4", 00:21:11.559 "traddr": "10.0.0.2", 00:21:11.559 "trsvcid": "4420" 00:21:11.559 }, 00:21:11.559 "peer_address": { 00:21:11.559 "trtype": "TCP", 00:21:11.559 "adrfam": "IPv4", 00:21:11.559 "traddr": "10.0.0.1", 00:21:11.559 "trsvcid": "44786" 00:21:11.559 }, 00:21:11.559 "auth": { 00:21:11.559 "state": "completed", 00:21:11.559 "digest": "sha384", 00:21:11.559 "dhgroup": "ffdhe6144" 00:21:11.559 } 00:21:11.559 } 00:21:11.559 ]' 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.559 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.818 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:11.818 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.385 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.644 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.903 00:21:12.903 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.903 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.903 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.162 { 00:21:13.162 "cntlid": 89, 00:21:13.162 "qid": 0, 00:21:13.162 "state": "enabled", 00:21:13.162 "thread": "nvmf_tgt_poll_group_000", 00:21:13.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.162 "listen_address": { 00:21:13.162 "trtype": "TCP", 00:21:13.162 "adrfam": "IPv4", 00:21:13.162 "traddr": "10.0.0.2", 00:21:13.162 "trsvcid": "4420" 00:21:13.162 }, 00:21:13.162 "peer_address": { 00:21:13.162 "trtype": "TCP", 00:21:13.162 "adrfam": "IPv4", 00:21:13.162 "traddr": "10.0.0.1", 00:21:13.162 "trsvcid": "44812" 00:21:13.162 }, 00:21:13.162 "auth": { 00:21:13.162 "state": "completed", 00:21:13.162 "digest": "sha384", 00:21:13.162 "dhgroup": "ffdhe8192" 00:21:13.162 } 00:21:13.162 } 00:21:13.162 ]' 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.162 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.421 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.421 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.421 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.421 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.421 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.680 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:13.680 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:14.248 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.248 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.248 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.248 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.249 12:26:41 
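
One detail worth calling out in the connect_authenticate frames: the controller (bidirectional) key is optional, and the script only appends --dhchap-ctrlr-key when a ckey exists for the key index, which is why the key3 passes above add the host with --dhchap-key key3 alone. The ckey=(${ckeys[$3]:+...}) line in the trace is the standard bash optional-argument idiom; a minimal standalone illustration with made-up values:

    # ${var:+word} expands to nothing when var is empty, so the flag pair vanishes.
    declare -a ckeys=("ckey-a" "ckey-b" "ckey-c" "")   # index 3 deliberately empty

    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # prints: ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # prints: ... --dhchap-key key3
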
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.249 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.817 00:21:14.817 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.817 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.817 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.076 { 00:21:15.076 "cntlid": 91, 00:21:15.076 "qid": 0, 00:21:15.076 "state": "enabled", 00:21:15.076 "thread": "nvmf_tgt_poll_group_000", 00:21:15.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.076 "listen_address": { 00:21:15.076 "trtype": "TCP", 00:21:15.076 "adrfam": "IPv4", 00:21:15.076 "traddr": "10.0.0.2", 00:21:15.076 "trsvcid": "4420" 00:21:15.076 }, 00:21:15.076 "peer_address": { 00:21:15.076 "trtype": "TCP", 00:21:15.076 "adrfam": "IPv4", 00:21:15.076 "traddr": "10.0.0.1", 00:21:15.076 "trsvcid": "44838" 00:21:15.076 }, 00:21:15.076 "auth": { 00:21:15.076 "state": "completed", 00:21:15.076 "digest": "sha384", 00:21:15.076 "dhgroup": "ffdhe8192" 00:21:15.076 } 00:21:15.076 } 00:21:15.076 ]' 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.076 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.335 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:15.335 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.903 12:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.903 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.162 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.731 00:21:16.731 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.731 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.731 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.990 12:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.990 { 00:21:16.990 "cntlid": 93, 00:21:16.990 "qid": 0, 00:21:16.990 "state": "enabled", 00:21:16.990 "thread": "nvmf_tgt_poll_group_000", 00:21:16.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.990 "listen_address": { 00:21:16.990 "trtype": "TCP", 00:21:16.990 "adrfam": "IPv4", 00:21:16.990 "traddr": "10.0.0.2", 00:21:16.990 "trsvcid": "4420" 00:21:16.990 }, 00:21:16.990 "peer_address": { 00:21:16.990 "trtype": "TCP", 00:21:16.990 "adrfam": "IPv4", 00:21:16.990 "traddr": "10.0.0.1", 00:21:16.990 "trsvcid": "44872" 00:21:16.990 }, 00:21:16.990 "auth": { 00:21:16.990 "state": "completed", 00:21:16.990 "digest": "sha384", 00:21:16.990 "dhgroup": "ffdhe8192" 00:21:16.990 } 00:21:16.990 } 00:21:16.990 ]' 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.990 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.250 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:17.250 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.818 12:26:45 
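
The bracketed JSON blocks above are the raw nvmf_subsystem_get_qpairs output, and the three jq probes that follow each dump assert the negotiated digest, DH group, and final authentication state. (The \s\h\a\3\8\4-style strings in the trace are not corruption; bash xtrace escapes the right-hand side of a [[ == ]] comparison character by character to show it is matched literally.) The check, written out plainly for the pass above:

    # Assert the freshly attached qpair authenticated with the expected parameters.
    # rpc_cmd is the suite's wrapper around scripts/rpc.py, as seen in the trace.
    digest=sha384 dhgroup=ffdhe8192
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
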
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.818 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.386 00:21:18.386 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.386 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.386 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.645 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.645 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.645 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.645 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.646 { 00:21:18.646 "cntlid": 95, 00:21:18.646 "qid": 0, 00:21:18.646 "state": "enabled", 00:21:18.646 "thread": "nvmf_tgt_poll_group_000", 00:21:18.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.646 "listen_address": { 00:21:18.646 "trtype": "TCP", 00:21:18.646 "adrfam": "IPv4", 00:21:18.646 "traddr": "10.0.0.2", 00:21:18.646 "trsvcid": "4420" 00:21:18.646 }, 00:21:18.646 "peer_address": { 00:21:18.646 "trtype": "TCP", 00:21:18.646 "adrfam": "IPv4", 00:21:18.646 "traddr": "10.0.0.1", 00:21:18.646 "trsvcid": "44890" 00:21:18.646 }, 00:21:18.646 "auth": { 00:21:18.646 "state": "completed", 00:21:18.646 "digest": "sha384", 00:21:18.646 "dhgroup": "ffdhe8192" 00:21:18.646 } 00:21:18.646 } 00:21:18.646 ]' 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.646 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.905 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:18.905 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.472 12:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.472 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.732 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.991 00:21:19.991 
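
At this point in the trace the outer digest loop advances from sha384 to sha512 and the dhgroup loop restarts at null, which exposes the overall shape of the sweep: digests crossed with DH groups crossed with key indices. A sketch of that structure; the array contents are partly assumed, since only sha384, sha512, null, ffdhe2048, ffdhe6144 and ffdhe8192 are visible in this excerpt, and hostrpc/connect_authenticate are the trace's own helpers:

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do   # key indices 0..3 in this run
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

Note that null is a legitimate entry: it selects plain challenge-response with no ephemeral Diffie-Hellman exchange, which is why those passes still report "state": "completed".
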
12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.991 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.991 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.250 { 00:21:20.250 "cntlid": 97, 00:21:20.250 "qid": 0, 00:21:20.250 "state": "enabled", 00:21:20.250 "thread": "nvmf_tgt_poll_group_000", 00:21:20.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.250 "listen_address": { 00:21:20.250 "trtype": "TCP", 00:21:20.250 "adrfam": "IPv4", 00:21:20.250 "traddr": "10.0.0.2", 00:21:20.250 "trsvcid": "4420" 00:21:20.250 }, 00:21:20.250 "peer_address": { 00:21:20.250 "trtype": "TCP", 00:21:20.250 "adrfam": "IPv4", 00:21:20.250 "traddr": "10.0.0.1", 00:21:20.250 "trsvcid": "44914" 00:21:20.250 }, 00:21:20.250 "auth": { 00:21:20.250 "state": "completed", 00:21:20.250 "digest": "sha512", 00:21:20.250 "dhgroup": "null" 00:21:20.250 } 00:21:20.250 } 00:21:20.250 ]' 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.250 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.509 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:20.509 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.078 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.337 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.596 00:21:21.596 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.596 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.596 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.855 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.855 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.855 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.855 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.856 { 00:21:21.856 "cntlid": 99, 00:21:21.856 "qid": 0, 00:21:21.856 "state": "enabled", 00:21:21.856 "thread": "nvmf_tgt_poll_group_000", 00:21:21.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.856 "listen_address": { 00:21:21.856 "trtype": "TCP", 00:21:21.856 "adrfam": "IPv4", 00:21:21.856 "traddr": "10.0.0.2", 00:21:21.856 "trsvcid": "4420" 00:21:21.856 }, 00:21:21.856 "peer_address": { 00:21:21.856 "trtype": "TCP", 00:21:21.856 "adrfam": "IPv4", 00:21:21.856 "traddr": "10.0.0.1", 00:21:21.856 "trsvcid": "44898" 00:21:21.856 }, 00:21:21.856 "auth": { 00:21:21.856 "state": "completed", 00:21:21.856 "digest": "sha512", 00:21:21.856 "dhgroup": "null" 00:21:21.856 } 00:21:21.856 } 00:21:21.856 ]' 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.856 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.114 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:22.114 12:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.682 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
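
Besides the in-app attach/detach, every pass re-runs the same handshake through the kernel initiator with nvme-cli, passing the secrets as the transport-format DHHC-1 strings printed in the trace. The pair of commands, trimmed down; addresses, NQNs and flags are as in the trace, while the secret values here are placeholders rather than the run's real keys:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # -i 1: a single I/O queue; -l 0: fail fast instead of retrying on ctrl loss.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" \
        --dhchap-secret 'DHHC-1:01:<base64-host-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64-ctrl-key>:'

    nvme disconnect -n "$SUBNQN"
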
00:21:22.941 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.200 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.200 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.459 { 00:21:23.459 "cntlid": 101, 00:21:23.459 "qid": 0, 00:21:23.459 "state": "enabled", 00:21:23.459 "thread": "nvmf_tgt_poll_group_000", 00:21:23.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.459 "listen_address": { 00:21:23.459 "trtype": "TCP", 00:21:23.459 "adrfam": "IPv4", 00:21:23.459 "traddr": "10.0.0.2", 00:21:23.459 "trsvcid": "4420" 00:21:23.459 }, 00:21:23.459 "peer_address": { 00:21:23.459 "trtype": "TCP", 00:21:23.459 "adrfam": "IPv4", 00:21:23.459 "traddr": "10.0.0.1", 00:21:23.459 "trsvcid": "44936" 00:21:23.459 }, 00:21:23.459 "auth": { 00:21:23.459 "state": "completed", 00:21:23.459 "digest": "sha512", 00:21:23.459 "dhgroup": "null" 00:21:23.459 } 00:21:23.459 } 00:21:23.459 ]' 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.459 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.460 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.460 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.460 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.460 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.719 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:23.719 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.287 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.546 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.805 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.805 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.064 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.065 { 00:21:25.065 "cntlid": 103, 00:21:25.065 "qid": 0, 00:21:25.065 "state": "enabled", 00:21:25.065 "thread": "nvmf_tgt_poll_group_000", 00:21:25.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.065 "listen_address": { 00:21:25.065 "trtype": "TCP", 00:21:25.065 "adrfam": "IPv4", 00:21:25.065 "traddr": "10.0.0.2", 00:21:25.065 "trsvcid": "4420" 00:21:25.065 }, 00:21:25.065 "peer_address": { 00:21:25.065 "trtype": "TCP", 00:21:25.065 "adrfam": "IPv4", 00:21:25.065 "traddr": "10.0.0.1", 00:21:25.065 "trsvcid": "44974" 00:21:25.065 }, 00:21:25.065 "auth": { 00:21:25.065 "state": "completed", 00:21:25.065 "digest": "sha512", 00:21:25.065 "dhgroup": "null" 00:21:25.065 } 00:21:25.065 } 00:21:25.065 ]' 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.065 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.324 12:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:25.324 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:25.892 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.893 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.893 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.893 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.893 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.151 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.151 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
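[Annotation] The iteration traced above reduces to three RPCs: constrain the host side's DH-HMAC-CHAP parameters, authorize the host NQN on the subsystem with a key pair, then attach a controller (authentication runs during attach). A minimal bash sketch of that sequence, using the NQNs, flags, and key names visible in this run; it assumes the target's RPC server listens on SPDK's default socket, while the host bdev layer uses /var/tmp/host.sock as in the hostrpc calls above, and that key0/ckey0 were registered earlier in the run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: accept only this digest/DH-group combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: authorize the host, binding host key and controller key.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach; DH-HMAC-CHAP (bidirectional, via ckey0) runs here.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0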
00:21:26.151 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.151 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.151 00:21:26.410 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.410 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.410 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.410 { 00:21:26.410 "cntlid": 105, 00:21:26.410 "qid": 0, 00:21:26.410 "state": "enabled", 00:21:26.410 "thread": "nvmf_tgt_poll_group_000", 00:21:26.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.410 "listen_address": { 00:21:26.410 "trtype": "TCP", 00:21:26.410 "adrfam": "IPv4", 00:21:26.410 "traddr": "10.0.0.2", 00:21:26.410 "trsvcid": "4420" 00:21:26.410 }, 00:21:26.410 "peer_address": { 00:21:26.410 "trtype": "TCP", 00:21:26.410 "adrfam": "IPv4", 00:21:26.410 "traddr": "10.0.0.1", 00:21:26.410 "trsvcid": "44998" 00:21:26.410 }, 00:21:26.410 "auth": { 00:21:26.410 "state": "completed", 00:21:26.410 "digest": "sha512", 00:21:26.410 "dhgroup": "ffdhe2048" 00:21:26.410 } 00:21:26.410 } 00:21:26.410 ]' 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.410 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.669 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.669 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.669 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.669 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.669 12:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.928 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:26.929 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.502 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.502 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.762 00:21:27.762 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.762 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.762 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.021 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.021 { 00:21:28.021 "cntlid": 107, 00:21:28.021 "qid": 0, 00:21:28.021 "state": "enabled", 00:21:28.021 "thread": "nvmf_tgt_poll_group_000", 00:21:28.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.021 "listen_address": { 00:21:28.021 "trtype": "TCP", 00:21:28.021 "adrfam": "IPv4", 00:21:28.021 "traddr": "10.0.0.2", 00:21:28.021 "trsvcid": "4420" 00:21:28.021 }, 00:21:28.021 "peer_address": { 00:21:28.021 "trtype": "TCP", 00:21:28.021 "adrfam": "IPv4", 00:21:28.021 "traddr": "10.0.0.1", 00:21:28.021 "trsvcid": "45014" 00:21:28.021 }, 00:21:28.021 "auth": { 00:21:28.021 "state": "completed", 00:21:28.021 "digest": "sha512", 00:21:28.021 "dhgroup": "ffdhe2048" 00:21:28.021 } 00:21:28.022 } 00:21:28.022 ]' 00:21:28.022 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.022 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.022 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.022 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.022 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:28.281 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.281 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.281 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.281 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:28.281 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.848 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
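[Annotation] The auth.sh@119 through auth.sh@123 markers in these entries outline the sweep being executed: an outer loop over DH groups, an inner loop over key indices, a host-side bdev_nvme_set_options, then connect_authenticate. A sketch of that shape, with the keys/ckeys arrays assumed to be populated earlier in target/auth.sh (only their expansions are visible in this log):

  digest=sha512
  for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3
          # Pin the host to exactly one digest/DH-group pair per iteration.
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done

Inside connect_authenticate, the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) traced above yields an empty array when no controller key exists for that index, which is why the key3 iterations add the host with --dhchap-key key3 only and skip bidirectional authentication.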
00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.107 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.108 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.108 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.367 00:21:29.367 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.367 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.367 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.626 { 00:21:29.626 "cntlid": 109, 00:21:29.626 "qid": 0, 00:21:29.626 "state": "enabled", 00:21:29.626 "thread": "nvmf_tgt_poll_group_000", 00:21:29.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.626 "listen_address": { 00:21:29.626 "trtype": "TCP", 00:21:29.626 "adrfam": "IPv4", 00:21:29.626 "traddr": "10.0.0.2", 00:21:29.626 "trsvcid": "4420" 00:21:29.626 }, 00:21:29.626 "peer_address": { 00:21:29.626 "trtype": "TCP", 00:21:29.626 "adrfam": "IPv4", 00:21:29.626 "traddr": "10.0.0.1", 00:21:29.626 "trsvcid": "45026" 00:21:29.626 }, 00:21:29.626 "auth": { 00:21:29.626 "state": "completed", 00:21:29.626 "digest": "sha512", 00:21:29.626 "dhgroup": "ffdhe2048" 00:21:29.626 } 00:21:29.626 } 00:21:29.626 ]' 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.626 12:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.626 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.885 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:29.885 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.453 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.713 12:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.713 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.973 00:21:30.973 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.973 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.973 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.232 { 00:21:31.232 "cntlid": 111, 00:21:31.232 "qid": 0, 00:21:31.232 "state": "enabled", 00:21:31.232 "thread": "nvmf_tgt_poll_group_000", 00:21:31.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.232 "listen_address": { 00:21:31.232 "trtype": "TCP", 00:21:31.232 "adrfam": "IPv4", 00:21:31.232 "traddr": "10.0.0.2", 00:21:31.232 "trsvcid": "4420" 00:21:31.232 }, 00:21:31.232 "peer_address": { 00:21:31.232 "trtype": "TCP", 00:21:31.232 "adrfam": "IPv4", 00:21:31.232 "traddr": "10.0.0.1", 00:21:31.232 "trsvcid": "45054" 00:21:31.232 }, 00:21:31.232 "auth": { 00:21:31.232 "state": "completed", 00:21:31.232 "digest": "sha512", 00:21:31.232 "dhgroup": "ffdhe2048" 00:21:31.232 } 00:21:31.232 } 00:21:31.232 ]' 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.232 
12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.232 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.491 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:31.491 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.058 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.318 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.575 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.575 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.833 { 00:21:32.833 "cntlid": 113, 00:21:32.833 "qid": 0, 00:21:32.833 "state": "enabled", 00:21:32.833 "thread": "nvmf_tgt_poll_group_000", 00:21:32.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.833 "listen_address": { 00:21:32.833 "trtype": "TCP", 00:21:32.833 "adrfam": "IPv4", 00:21:32.833 "traddr": "10.0.0.2", 00:21:32.833 "trsvcid": "4420" 00:21:32.833 }, 00:21:32.833 "peer_address": { 00:21:32.833 "trtype": "TCP", 00:21:32.833 "adrfam": "IPv4", 00:21:32.833 "traddr": "10.0.0.1", 00:21:32.833 "trsvcid": "33872" 00:21:32.833 }, 00:21:32.833 "auth": { 00:21:32.833 "state": "completed", 00:21:32.833 "digest": "sha512", 00:21:32.833 "dhgroup": "ffdhe3072" 00:21:32.833 } 00:21:32.833 } 00:21:32.833 ]' 00:21:32.833 12:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.833 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.091 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:33.091 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.660 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.919 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.178 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.178 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.437 { 00:21:34.437 "cntlid": 115, 00:21:34.437 "qid": 0, 00:21:34.437 "state": "enabled", 00:21:34.437 "thread": "nvmf_tgt_poll_group_000", 00:21:34.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.437 "listen_address": { 00:21:34.437 "trtype": "TCP", 00:21:34.437 "adrfam": "IPv4", 00:21:34.437 "traddr": "10.0.0.2", 00:21:34.437 "trsvcid": "4420" 00:21:34.437 }, 00:21:34.437 "peer_address": { 00:21:34.437 "trtype": "TCP", 00:21:34.437 "adrfam": "IPv4", 
00:21:34.437 "traddr": "10.0.0.1", 00:21:34.437 "trsvcid": "33912" 00:21:34.437 }, 00:21:34.437 "auth": { 00:21:34.437 "state": "completed", 00:21:34.437 "digest": "sha512", 00:21:34.437 "dhgroup": "ffdhe3072" 00:21:34.437 } 00:21:34.437 } 00:21:34.437 ]' 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.437 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.696 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:34.696 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.264 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.523 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.782 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.782 { 00:21:35.782 "cntlid": 117, 00:21:35.782 "qid": 0, 00:21:35.782 "state": "enabled", 00:21:35.782 "thread": "nvmf_tgt_poll_group_000", 00:21:35.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.782 "listen_address": { 00:21:35.782 "trtype": "TCP", 
00:21:35.782 "adrfam": "IPv4", 00:21:35.782 "traddr": "10.0.0.2", 00:21:35.782 "trsvcid": "4420" 00:21:35.782 }, 00:21:35.782 "peer_address": { 00:21:35.782 "trtype": "TCP", 00:21:35.782 "adrfam": "IPv4", 00:21:35.782 "traddr": "10.0.0.1", 00:21:35.782 "trsvcid": "33922" 00:21:35.782 }, 00:21:35.782 "auth": { 00:21:35.782 "state": "completed", 00:21:35.782 "digest": "sha512", 00:21:35.782 "dhgroup": "ffdhe3072" 00:21:35.782 } 00:21:35.782 } 00:21:35.782 ]' 00:21:35.782 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.041 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.300 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:36.300 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.869 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.128 00:21:37.128 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.128 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.128 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.387 { 00:21:37.387 "cntlid": 119, 00:21:37.387 "qid": 0, 00:21:37.387 "state": "enabled", 00:21:37.387 "thread": "nvmf_tgt_poll_group_000", 00:21:37.387 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.387 "listen_address": { 00:21:37.387 "trtype": "TCP", 00:21:37.387 "adrfam": "IPv4", 00:21:37.387 "traddr": "10.0.0.2", 00:21:37.387 "trsvcid": "4420" 00:21:37.387 }, 00:21:37.387 "peer_address": { 00:21:37.387 "trtype": "TCP", 00:21:37.387 "adrfam": "IPv4", 00:21:37.387 "traddr": "10.0.0.1", 00:21:37.387 "trsvcid": "33952" 00:21:37.387 }, 00:21:37.387 "auth": { 00:21:37.387 "state": "completed", 00:21:37.387 "digest": "sha512", 00:21:37.387 "dhgroup": "ffdhe3072" 00:21:37.387 } 00:21:37.387 } 00:21:37.387 ]' 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.387 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.646 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.646 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.646 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.646 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.646 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.904 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:37.905 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.473 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.473 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.473 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.474 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.734 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.993 12:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.993 { 00:21:38.993 "cntlid": 121, 00:21:38.993 "qid": 0, 00:21:38.993 "state": "enabled", 00:21:38.993 "thread": "nvmf_tgt_poll_group_000", 00:21:38.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.993 "listen_address": { 00:21:38.993 "trtype": "TCP", 00:21:38.993 "adrfam": "IPv4", 00:21:38.993 "traddr": "10.0.0.2", 00:21:38.993 "trsvcid": "4420" 00:21:38.993 }, 00:21:38.993 "peer_address": { 00:21:38.993 "trtype": "TCP", 00:21:38.993 "adrfam": "IPv4", 00:21:38.993 "traddr": "10.0.0.1", 00:21:38.993 "trsvcid": "33990" 00:21:38.993 }, 00:21:38.993 "auth": { 00:21:38.993 "state": "completed", 00:21:38.993 "digest": "sha512", 00:21:38.993 "dhgroup": "ffdhe4096" 00:21:38.993 } 00:21:38.993 } 00:21:38.993 ]' 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.993 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.252 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.252 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.252 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.252 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.252 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.511 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:39.511 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
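
The cycle that just completed above repeats for every digest/DH-group/key combination. Condensed from this trace, one pass of connect_authenticate reduces to the RPC sequence below. This is a minimal sketch, not the script itself: it assumes rpc.py from the SPDK checkout, a host-side RPC service on /var/tmp/host.sock, a target listening on 10.0.0.2:4420, and DHCHAP keys named key0/ckey0 already registered on both sides earlier in auth.sh, outside this excerpt.

    # One DH-HMAC-CHAP authentication round, condensed from the trace above.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    hostsock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Restrict the host to a single digest and DH group so the negotiated
    # parameters are deterministic and can be checked afterwards.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Allow the host on the subsystem; supplying --dhchap-ctrlr-key makes the
    # authentication bidirectional, omitting it leaves it unidirectional.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host side; this is where the DH-HMAC-CHAP handshake runs.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the negotiated parameters on the target's qpair, as the script
    # does with its jq checks of .auth.state/.auth.digest/.auth.dhgroup.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | \
        jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
    # expected output: completed sha512 ffdhe4096

    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The nvme_connect step seen throughout the trace then exercises the same handshake from the kernel initiator: nvme connect takes the secrets in-band as DHHC-1:... blobs via --dhchap-secret and --dhchap-ctrl-secret rather than referencing named keys, and the script finishes each round with nvme disconnect and nvmf_subsystem_remove_host before moving to the next key or DH group.
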
00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.080 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.340 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.599 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.599 { 00:21:40.599 "cntlid": 123, 00:21:40.599 "qid": 0, 00:21:40.599 "state": "enabled", 00:21:40.599 "thread": "nvmf_tgt_poll_group_000", 00:21:40.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.599 "listen_address": { 00:21:40.599 "trtype": "TCP", 00:21:40.599 "adrfam": "IPv4", 00:21:40.599 "traddr": "10.0.0.2", 00:21:40.599 "trsvcid": "4420" 00:21:40.599 }, 00:21:40.599 "peer_address": { 00:21:40.599 "trtype": "TCP", 00:21:40.599 "adrfam": "IPv4", 00:21:40.599 "traddr": "10.0.0.1", 00:21:40.599 "trsvcid": "34016" 00:21:40.599 }, 00:21:40.599 "auth": { 00:21:40.599 "state": "completed", 00:21:40.599 "digest": "sha512", 00:21:40.599 "dhgroup": "ffdhe4096" 00:21:40.599 } 00:21:40.599 } 00:21:40.599 ]' 00:21:40.599 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.859 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.117 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:41.117 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.686 12:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.686 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.945 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.945 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.945 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.945 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.204 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.204 12:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.204 { 00:21:42.204 "cntlid": 125, 00:21:42.204 "qid": 0, 00:21:42.204 "state": "enabled", 00:21:42.204 "thread": "nvmf_tgt_poll_group_000", 00:21:42.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.204 "listen_address": { 00:21:42.204 "trtype": "TCP", 00:21:42.204 "adrfam": "IPv4", 00:21:42.204 "traddr": "10.0.0.2", 00:21:42.204 "trsvcid": "4420" 00:21:42.204 }, 00:21:42.204 "peer_address": { 00:21:42.204 "trtype": "TCP", 00:21:42.204 "adrfam": "IPv4", 00:21:42.204 "traddr": "10.0.0.1", 00:21:42.204 "trsvcid": "44238" 00:21:42.204 }, 00:21:42.204 "auth": { 00:21:42.204 "state": "completed", 00:21:42.204 "digest": "sha512", 00:21:42.204 "dhgroup": "ffdhe4096" 00:21:42.204 } 00:21:42.204 } 00:21:42.204 ]' 00:21:42.204 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.462 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.462 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.462 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.462 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.463 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.463 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.463 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.721 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:42.721 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.289 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.548 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.808 00:21:43.808 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.808 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.808 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.068 12:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.068 { 00:21:44.068 "cntlid": 127, 00:21:44.068 "qid": 0, 00:21:44.068 "state": "enabled", 00:21:44.068 "thread": "nvmf_tgt_poll_group_000", 00:21:44.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.068 "listen_address": { 00:21:44.068 "trtype": "TCP", 00:21:44.068 "adrfam": "IPv4", 00:21:44.068 "traddr": "10.0.0.2", 00:21:44.068 "trsvcid": "4420" 00:21:44.068 }, 00:21:44.068 "peer_address": { 00:21:44.068 "trtype": "TCP", 00:21:44.068 "adrfam": "IPv4", 00:21:44.068 "traddr": "10.0.0.1", 00:21:44.068 "trsvcid": "44262" 00:21:44.068 }, 00:21:44.068 "auth": { 00:21:44.068 "state": "completed", 00:21:44.068 "digest": "sha512", 00:21:44.068 "dhgroup": "ffdhe4096" 00:21:44.068 } 00:21:44.068 } 00:21:44.068 ]' 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.068 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.327 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:44.327 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.895 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.152 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.410 00:21:45.410 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.410 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.410 
12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.669 { 00:21:45.669 "cntlid": 129, 00:21:45.669 "qid": 0, 00:21:45.669 "state": "enabled", 00:21:45.669 "thread": "nvmf_tgt_poll_group_000", 00:21:45.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.669 "listen_address": { 00:21:45.669 "trtype": "TCP", 00:21:45.669 "adrfam": "IPv4", 00:21:45.669 "traddr": "10.0.0.2", 00:21:45.669 "trsvcid": "4420" 00:21:45.669 }, 00:21:45.669 "peer_address": { 00:21:45.669 "trtype": "TCP", 00:21:45.669 "adrfam": "IPv4", 00:21:45.669 "traddr": "10.0.0.1", 00:21:45.669 "trsvcid": "44288" 00:21:45.669 }, 00:21:45.669 "auth": { 00:21:45.669 "state": "completed", 00:21:45.669 "digest": "sha512", 00:21:45.669 "dhgroup": "ffdhe6144" 00:21:45.669 } 00:21:45.669 } 00:21:45.669 ]' 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.669 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.928 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:45.928 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret 
DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.495 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.754 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.013 00:21:47.013 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.013 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.013 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.272 { 00:21:47.272 "cntlid": 131, 00:21:47.272 "qid": 0, 00:21:47.272 "state": "enabled", 00:21:47.272 "thread": "nvmf_tgt_poll_group_000", 00:21:47.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.272 "listen_address": { 00:21:47.272 "trtype": "TCP", 00:21:47.272 "adrfam": "IPv4", 00:21:47.272 "traddr": "10.0.0.2", 00:21:47.272 "trsvcid": "4420" 00:21:47.272 }, 00:21:47.272 "peer_address": { 00:21:47.272 "trtype": "TCP", 00:21:47.272 "adrfam": "IPv4", 00:21:47.272 "traddr": "10.0.0.1", 00:21:47.272 "trsvcid": "44324" 00:21:47.272 }, 00:21:47.272 "auth": { 00:21:47.272 "state": "completed", 00:21:47.272 "digest": "sha512", 00:21:47.272 "dhgroup": "ffdhe6144" 00:21:47.272 } 00:21:47.272 } 00:21:47.272 ]' 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.272 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.273 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.273 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.273 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.532 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.532 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.532 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.532 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:47.532 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.099 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.358 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.616 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.874 { 00:21:48.874 "cntlid": 133, 00:21:48.874 "qid": 0, 00:21:48.874 "state": "enabled", 00:21:48.874 "thread": "nvmf_tgt_poll_group_000", 00:21:48.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.874 "listen_address": { 00:21:48.874 "trtype": "TCP", 00:21:48.874 "adrfam": "IPv4", 00:21:48.874 "traddr": "10.0.0.2", 00:21:48.874 "trsvcid": "4420" 00:21:48.874 }, 00:21:48.874 "peer_address": { 00:21:48.874 "trtype": "TCP", 00:21:48.874 "adrfam": "IPv4", 00:21:48.874 "traddr": "10.0.0.1", 00:21:48.874 "trsvcid": "44360" 00:21:48.874 }, 00:21:48.874 "auth": { 00:21:48.874 "state": "completed", 00:21:48.874 "digest": "sha512", 00:21:48.874 "dhgroup": "ffdhe6144" 00:21:48.874 } 00:21:48.874 } 00:21:48.874 ]' 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.874 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.133 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.133 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.133 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.133 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.133 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.392 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret 
DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:49.392 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:49.959 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:49.960 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.527 00:21:50.527 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.527 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.527 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.527 { 00:21:50.527 "cntlid": 135, 00:21:50.527 "qid": 0, 00:21:50.527 "state": "enabled", 00:21:50.527 "thread": "nvmf_tgt_poll_group_000", 00:21:50.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.527 "listen_address": { 00:21:50.527 "trtype": "TCP", 00:21:50.527 "adrfam": "IPv4", 00:21:50.527 "traddr": "10.0.0.2", 00:21:50.527 "trsvcid": "4420" 00:21:50.527 }, 00:21:50.527 "peer_address": { 00:21:50.527 "trtype": "TCP", 00:21:50.527 "adrfam": "IPv4", 00:21:50.527 "traddr": "10.0.0.1", 00:21:50.527 "trsvcid": "44388" 00:21:50.527 }, 00:21:50.527 "auth": { 00:21:50.527 "state": "completed", 00:21:50.527 "digest": "sha512", 00:21:50.527 "dhgroup": "ffdhe6144" 00:21:50.527 } 00:21:50.527 } 00:21:50.527 ]' 00:21:50.527 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.786 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.044 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:51.044 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.871 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.871 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.872 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.872 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.190 00:21:52.190 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.190 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.190 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.449 { 00:21:52.449 "cntlid": 137, 00:21:52.449 "qid": 0, 00:21:52.449 "state": "enabled", 00:21:52.449 "thread": "nvmf_tgt_poll_group_000", 00:21:52.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.449 "listen_address": { 00:21:52.449 "trtype": "TCP", 00:21:52.449 "adrfam": "IPv4", 00:21:52.449 "traddr": "10.0.0.2", 00:21:52.449 "trsvcid": "4420" 00:21:52.449 }, 00:21:52.449 "peer_address": { 00:21:52.449 "trtype": "TCP", 00:21:52.449 "adrfam": "IPv4", 00:21:52.449 "traddr": "10.0.0.1", 00:21:52.449 "trsvcid": "42568" 00:21:52.449 }, 00:21:52.449 "auth": { 00:21:52.449 "state": "completed", 00:21:52.449 "digest": "sha512", 00:21:52.449 "dhgroup": "ffdhe8192" 00:21:52.449 } 00:21:52.449 } 00:21:52.449 ]' 00:21:52.449 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.449 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.708 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:52.708 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.277 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.536 12:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.536 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.104 00:21:54.104 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.104 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.104 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.104 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.363 { 00:21:54.363 "cntlid": 139, 00:21:54.363 "qid": 0, 00:21:54.363 "state": "enabled", 00:21:54.363 "thread": "nvmf_tgt_poll_group_000", 00:21:54.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.363 "listen_address": { 00:21:54.363 "trtype": "TCP", 00:21:54.363 "adrfam": "IPv4", 00:21:54.363 "traddr": "10.0.0.2", 00:21:54.363 "trsvcid": "4420" 00:21:54.363 }, 00:21:54.363 "peer_address": { 00:21:54.363 "trtype": "TCP", 00:21:54.363 "adrfam": "IPv4", 00:21:54.363 "traddr": "10.0.0.1", 00:21:54.363 "trsvcid": "42606" 00:21:54.363 }, 00:21:54.363 "auth": { 00:21:54.363 "state": "completed", 00:21:54.363 "digest": "sha512", 00:21:54.363 "dhgroup": "ffdhe8192" 00:21:54.363 } 00:21:54.363 } 00:21:54.363 ]' 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.363 12:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.363 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.622 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:54.622 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: --dhchap-ctrl-secret DHHC-1:02:NDY2NjNlMmMyNTRmNmZhNGQ0N2Y2MDA1ZWQ3NjZjM2M1MGRjNjJhMmJiMGNhMjQzJPsDiQ==: 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.191 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.451 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.451 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.710 00:21:55.710 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.710 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.710 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.969 { 00:21:55.969 "cntlid": 141, 00:21:55.969 "qid": 0, 00:21:55.969 "state": "enabled", 00:21:55.969 "thread": "nvmf_tgt_poll_group_000", 00:21:55.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.969 "listen_address": { 00:21:55.969 "trtype": "TCP", 00:21:55.969 "adrfam": "IPv4", 00:21:55.969 "traddr": "10.0.0.2", 00:21:55.969 "trsvcid": "4420" 00:21:55.969 }, 00:21:55.969 "peer_address": { 00:21:55.969 "trtype": "TCP", 00:21:55.969 "adrfam": "IPv4", 00:21:55.969 "traddr": "10.0.0.1", 00:21:55.969 "trsvcid": "42626" 00:21:55.969 }, 00:21:55.969 "auth": { 00:21:55.969 "state": "completed", 00:21:55.969 "digest": "sha512", 00:21:55.969 "dhgroup": "ffdhe8192" 00:21:55.969 } 00:21:55.969 } 00:21:55.969 ]' 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.969 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.228 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.228 12:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.228 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.228 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.228 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.228 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.487 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:56.487 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:01:MThhZDU0NjZkMDQ1ZWY4NTAwNGE4Y2U0ZWE0MGI2NDjIq3/8: 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.055 12:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.055 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.624 00:21:57.624 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.624 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.624 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.882 { 00:21:57.882 "cntlid": 143, 00:21:57.882 "qid": 0, 00:21:57.882 "state": "enabled", 00:21:57.882 "thread": "nvmf_tgt_poll_group_000", 00:21:57.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.882 "listen_address": { 00:21:57.882 "trtype": "TCP", 00:21:57.882 "adrfam": "IPv4", 00:21:57.882 "traddr": "10.0.0.2", 00:21:57.882 "trsvcid": "4420" 00:21:57.882 }, 00:21:57.882 "peer_address": { 00:21:57.882 "trtype": "TCP", 00:21:57.882 "adrfam": "IPv4", 00:21:57.882 "traddr": "10.0.0.1", 00:21:57.882 "trsvcid": "42644" 00:21:57.882 }, 00:21:57.882 "auth": { 00:21:57.882 "state": "completed", 00:21:57.882 "digest": "sha512", 00:21:57.882 "dhgroup": "ffdhe8192" 00:21:57.882 } 00:21:57.882 } 00:21:57.882 ]' 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.882 
12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.882 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.140 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:58.140 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.707 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.967 12:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.535 00:21:59.535 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.535 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.535 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.535 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.535 { 00:21:59.535 "cntlid": 145, 00:21:59.535 "qid": 0, 00:21:59.535 "state": "enabled", 00:21:59.535 "thread": "nvmf_tgt_poll_group_000", 00:21:59.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:59.535 "listen_address": { 00:21:59.535 "trtype": "TCP", 00:21:59.535 "adrfam": "IPv4", 00:21:59.535 "traddr": "10.0.0.2", 00:21:59.535 "trsvcid": "4420" 00:21:59.535 }, 00:21:59.536 "peer_address": { 00:21:59.536 
"trtype": "TCP", 00:21:59.536 "adrfam": "IPv4", 00:21:59.536 "traddr": "10.0.0.1", 00:21:59.536 "trsvcid": "42680" 00:21:59.536 }, 00:21:59.536 "auth": { 00:21:59.536 "state": "completed", 00:21:59.536 "digest": "sha512", 00:21:59.536 "dhgroup": "ffdhe8192" 00:21:59.536 } 00:21:59.536 } 00:21:59.536 ]' 00:21:59.536 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.794 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.053 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:22:00.053 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YmIxODNkZWZkOGFmZjVkZmY1MmU3M2IxOTIwNzBmNmY5YjkzM2M1ZDFkZTI5ZGQ4ND2/sA==: --dhchap-ctrl-secret DHHC-1:03:MjY1MWQzZGE2ZjNjOWVlNTJjOTZkOGVlMzcwY2NkZmZmOGVmNTdkZWE1ZDU4NTZiYzAzNTFlMjAzZmFjNjA2NBiVVgc=: 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:00.621 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:00.881 request: 00:22:00.881 { 00:22:00.881 "name": "nvme0", 00:22:00.881 "trtype": "tcp", 00:22:00.881 "traddr": "10.0.0.2", 00:22:00.881 "adrfam": "ipv4", 00:22:00.881 "trsvcid": "4420", 00:22:00.881 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:00.881 "prchk_reftag": false, 00:22:00.881 "prchk_guard": false, 00:22:00.881 "hdgst": false, 00:22:00.881 "ddgst": false, 00:22:00.881 "dhchap_key": "key2", 00:22:00.881 "allow_unrecognized_csi": false, 00:22:00.881 "method": "bdev_nvme_attach_controller", 00:22:00.881 "req_id": 1 00:22:00.881 } 00:22:00.881 Got JSON-RPC error response 00:22:00.881 response: 00:22:00.881 { 00:22:00.881 "code": -5, 00:22:00.881 "message": "Input/output error" 00:22:00.881 } 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.881 12:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.881 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:00.882 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.450 request: 00:22:01.450 { 00:22:01.450 "name": "nvme0", 00:22:01.450 "trtype": "tcp", 00:22:01.450 "traddr": "10.0.0.2", 00:22:01.450 "adrfam": "ipv4", 00:22:01.450 "trsvcid": "4420", 00:22:01.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:01.450 "prchk_reftag": false, 00:22:01.450 "prchk_guard": false, 00:22:01.450 "hdgst": false, 00:22:01.450 "ddgst": false, 00:22:01.450 "dhchap_key": "key1", 00:22:01.450 "dhchap_ctrlr_key": "ckey2", 00:22:01.450 "allow_unrecognized_csi": false, 00:22:01.450 "method": "bdev_nvme_attach_controller", 00:22:01.450 "req_id": 1 00:22:01.450 } 00:22:01.450 Got JSON-RPC error response 00:22:01.450 response: 00:22:01.450 { 00:22:01.450 "code": -5, 00:22:01.450 "message": "Input/output error" 00:22:01.450 } 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.450 12:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.450 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.019 request: 00:22:02.019 { 00:22:02.019 "name": "nvme0", 00:22:02.019 "trtype": "tcp", 00:22:02.019 "traddr": "10.0.0.2", 00:22:02.019 "adrfam": "ipv4", 00:22:02.019 "trsvcid": "4420", 00:22:02.019 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:02.019 "prchk_reftag": false, 00:22:02.019 "prchk_guard": false, 00:22:02.019 "hdgst": false, 00:22:02.019 "ddgst": false, 00:22:02.019 "dhchap_key": "key1", 00:22:02.019 "dhchap_ctrlr_key": "ckey1", 00:22:02.019 "allow_unrecognized_csi": false, 00:22:02.019 "method": "bdev_nvme_attach_controller", 00:22:02.019 "req_id": 1 00:22:02.019 } 00:22:02.019 Got JSON-RPC error response 00:22:02.019 response: 00:22:02.019 { 00:22:02.019 "code": -5, 00:22:02.019 "message": "Input/output error" 00:22:02.019 } 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314196 ']' 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314196' 00:22:02.019 killing process with pid 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314196 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=336322 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 336322 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336322 ']' 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.019 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 336322 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336322 ']' 00:22:02.278 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.279 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.279 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:02.279 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.279 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.537 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.537 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.537 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:02.537 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.537 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.537 null0 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h0t 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Wjo ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wjo 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YiK 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.0Hg ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Hg 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.796 12:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yW0 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.796 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ezd ]] 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ezd 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Plq 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:22:02.797 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.366 nvme0n1 00:22:03.366 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.366 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.366 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.625 { 00:22:03.625 "cntlid": 1, 00:22:03.625 "qid": 0, 00:22:03.625 "state": "enabled", 00:22:03.625 "thread": "nvmf_tgt_poll_group_000", 00:22:03.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:03.625 "listen_address": { 00:22:03.625 "trtype": "TCP", 00:22:03.625 "adrfam": "IPv4", 00:22:03.625 "traddr": "10.0.0.2", 00:22:03.625 "trsvcid": "4420" 00:22:03.625 }, 00:22:03.625 "peer_address": { 00:22:03.625 "trtype": "TCP", 00:22:03.625 "adrfam": "IPv4", 00:22:03.625 "traddr": "10.0.0.1", 00:22:03.625 "trsvcid": "53236" 00:22:03.625 }, 00:22:03.625 "auth": { 00:22:03.625 "state": "completed", 00:22:03.625 "digest": "sha512", 00:22:03.625 "dhgroup": "ffdhe8192" 00:22:03.625 } 00:22:03.625 } 00:22:03.625 ]' 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.625 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.883 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.884 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.884 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.884 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.884 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.145 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:22:04.145 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:22:04.712 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:04.713 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.973 request: 00:22:04.973 { 00:22:04.973 "name": "nvme0", 00:22:04.973 "trtype": "tcp", 00:22:04.973 "traddr": "10.0.0.2", 00:22:04.973 "adrfam": "ipv4", 00:22:04.973 "trsvcid": "4420", 00:22:04.973 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:04.973 "prchk_reftag": false, 00:22:04.973 "prchk_guard": false, 00:22:04.973 "hdgst": false, 00:22:04.973 "ddgst": false, 00:22:04.973 "dhchap_key": "key3", 00:22:04.973 "allow_unrecognized_csi": false, 00:22:04.973 "method": "bdev_nvme_attach_controller", 00:22:04.973 "req_id": 1 00:22:04.973 } 00:22:04.973 Got JSON-RPC error response 00:22:04.973 response: 00:22:04.973 { 00:22:04.973 "code": -5, 00:22:04.973 "message": "Input/output error" 00:22:04.973 } 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:04.973 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.232 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.233 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.233 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.233 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.492 request: 00:22:05.492 { 00:22:05.492 "name": "nvme0", 00:22:05.492 "trtype": "tcp", 00:22:05.492 "traddr": "10.0.0.2", 00:22:05.492 "adrfam": "ipv4", 00:22:05.492 "trsvcid": "4420", 00:22:05.492 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:05.492 "prchk_reftag": false, 00:22:05.492 "prchk_guard": false, 00:22:05.492 "hdgst": false, 00:22:05.492 "ddgst": false, 00:22:05.492 "dhchap_key": "key3", 00:22:05.492 "allow_unrecognized_csi": false, 00:22:05.492 "method": "bdev_nvme_attach_controller", 00:22:05.492 "req_id": 1 00:22:05.492 } 00:22:05.492 Got JSON-RPC error response 00:22:05.492 response: 00:22:05.492 { 00:22:05.492 "code": -5, 00:22:05.492 "message": "Input/output error" 00:22:05.492 } 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.492 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.751 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.010 request: 00:22:06.010 { 00:22:06.010 "name": "nvme0", 00:22:06.010 "trtype": "tcp", 00:22:06.010 "traddr": "10.0.0.2", 00:22:06.010 "adrfam": "ipv4", 00:22:06.010 "trsvcid": "4420", 00:22:06.010 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:06.010 "prchk_reftag": false, 00:22:06.010 "prchk_guard": false, 00:22:06.010 "hdgst": false, 00:22:06.010 "ddgst": false, 00:22:06.010 "dhchap_key": "key0", 00:22:06.010 "dhchap_ctrlr_key": "key1", 00:22:06.010 "allow_unrecognized_csi": false, 00:22:06.010 "method": "bdev_nvme_attach_controller", 00:22:06.010 "req_id": 1 00:22:06.010 } 00:22:06.010 Got JSON-RPC error response 00:22:06.010 response: 00:22:06.010 { 00:22:06.010 "code": -5, 00:22:06.010 "message": "Input/output error" 00:22:06.010 } 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.010 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.010 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.269 nvme0n1 00:22:06.269 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:06.269 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.269 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:06.528 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.528 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.528 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:06.787 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.355 nvme0n1 00:22:07.355 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:07.355 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:07.355 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.614 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:07.871 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.871 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:22:07.871 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjExOGVmZjdlYjgzN2QyMTYyYzY3NzkxNTFhODhjN2Y2Y2ZjNjhhOTA0OWZkN2Y5NjQ5ZjU1ZDc1NGI2YwM7G50=: 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.438 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.697 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.955 request: 00:22:08.955 { 00:22:08.955 "name": "nvme0", 00:22:08.955 "trtype": "tcp", 00:22:08.955 "traddr": "10.0.0.2", 00:22:08.955 "adrfam": "ipv4", 00:22:08.955 "trsvcid": "4420", 00:22:08.955 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:08.955 "prchk_reftag": false, 00:22:08.955 "prchk_guard": false, 00:22:08.955 "hdgst": false, 00:22:08.955 "ddgst": false, 00:22:08.955 "dhchap_key": "key1", 00:22:08.955 "allow_unrecognized_csi": false, 00:22:08.955 "method": "bdev_nvme_attach_controller", 00:22:08.955 "req_id": 1 00:22:08.955 } 00:22:08.955 Got JSON-RPC error response 00:22:08.955 response: 00:22:08.955 { 00:22:08.955 "code": -5, 00:22:08.955 "message": "Input/output error" 00:22:08.955 } 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.955 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.892 nvme0n1 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.892 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.151 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.410 nvme0n1 00:22:10.410 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:10.410 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:10.410 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.669 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.669 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.669 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: '' 2s 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: ]] 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDA4ZjhhZmRhNzAyMWExNDNiYjE2YzdhZWRjMjc2NWXW/lM8: 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:10.928 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: 2s 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: ]] 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGEyNDE5MmJlN2U3YzgwMzRhZDAxNjAyMjA5ZTRlZjQzNTRhNjg5YzE2MjMxMGJmdRJz9A==: 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:12.832 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.367 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.368 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.368 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.626 nvme0n1 00:22:15.626 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.626 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.626 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.885 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.886 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.886 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.144 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:16.144 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:16.144 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:16.403 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:16.662 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:16.662 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:16.662 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.921 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.179 request: 00:22:17.179 { 00:22:17.179 "name": "nvme0", 00:22:17.179 "dhchap_key": "key1", 00:22:17.179 "dhchap_ctrlr_key": "key3", 00:22:17.179 "method": "bdev_nvme_set_keys", 00:22:17.179 "req_id": 1 00:22:17.179 } 00:22:17.179 Got JSON-RPC error response 00:22:17.179 response: 00:22:17.179 { 00:22:17.179 "code": -13, 00:22:17.179 "message": "Permission denied" 00:22:17.179 } 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:17.179 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.438 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:17.438 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:18.375 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:18.375 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:18.375 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.634 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.571 nvme0n1 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
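What this NOT wrapper is checking: a host-side rekey may only name a key pair the subsystem currently allows. The subsystem was just pinned to key2/key3, so the bdev_nvme_set_keys call being expanded here, which asks for key2/key0, must come back with -13 Permission denied (the response appears further down). Condensed into plain rpc.py calls, the successful and failing paths of this rotation look like the minimal sketch below (same $rpc, $subnqn and $hostnqn helper variables as the earlier sketch; the concrete key names are just the ones used in this run).

# Target side: pin the subsystem to the key pair it will accept next.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
  --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rekey the live controller with the matching pair -> succeeds.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
  --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: ask for a pair the subsystem does not allow -> JSON-RPC -13.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
  --dhchap-key key2 --dhchap-ctrlr-key key0 \
  || echo 'rejected: Permission denied'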
00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.571 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.830 request: 00:22:19.830 { 00:22:19.830 "name": "nvme0", 00:22:19.830 "dhchap_key": "key2", 00:22:19.830 "dhchap_ctrlr_key": "key0", 00:22:19.830 "method": "bdev_nvme_set_keys", 00:22:19.830 "req_id": 1 00:22:19.830 } 00:22:19.830 Got JSON-RPC error response 00:22:19.830 response: 00:22:19.830 { 00:22:19.830 "code": -13, 00:22:19.831 "message": "Permission denied" 00:22:19.831 } 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:19.831 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.090 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:20.090 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:21.027 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:21.027 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:21.027 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 314285 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314285 ']' 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314285 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:21.287 12:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314285 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314285' 00:22:21.287 killing process with pid 314285 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314285 00:22:21.287 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314285 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.546 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.546 rmmod nvme_tcp 00:22:21.546 rmmod nvme_fabrics 00:22:21.546 rmmod nvme_keyring 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 336322 ']' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 336322 ']' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336322' 00:22:21.806 killing process with pid 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 336322 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.806 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.h0t /tmp/spdk.key-sha256.YiK /tmp/spdk.key-sha384.yW0 /tmp/spdk.key-sha512.Plq /tmp/spdk.key-sha512.Wjo /tmp/spdk.key-sha384.0Hg /tmp/spdk.key-sha256.Ezd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:24.342 00:22:24.342 real 2m34.454s 00:22:24.342 user 5m55.354s 00:22:24.342 sys 0m24.078s 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.342 ************************************ 00:22:24.342 END TEST nvmf_auth_target 00:22:24.342 ************************************ 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.342 ************************************ 00:22:24.342 START TEST nvmf_bdevio_no_huge 00:22:24.342 ************************************ 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.342 * Looking for test storage... 
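The run now rolls straight into the next test (nvmf_bdevio_no_huge), and the first thing its common bootstrap does, traced just below, is decide whether the installed lcov predates 2.x: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' and walks the numeric components left to right. The following is a stand-alone sketch of that comparison under the semantics visible in the trace; the real helper in scripts/common.sh also runs each component through a decimal() normalizer, which this sketch omits, so it assumes purely numeric components.

# Sketch of the version test traced below: succeeds when $1 is strictly
# older than $2, comparing dot/dash/colon separated numeric components
# (a missing component counts as 0).
version_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"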
00:22:24.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.342 --rc genhtml_branch_coverage=1 00:22:24.342 --rc genhtml_function_coverage=1 00:22:24.342 --rc genhtml_legend=1 00:22:24.342 --rc geninfo_all_blocks=1 00:22:24.342 --rc geninfo_unexecuted_blocks=1 00:22:24.342 00:22:24.342 ' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.342 --rc genhtml_branch_coverage=1 00:22:24.342 --rc genhtml_function_coverage=1 00:22:24.342 --rc genhtml_legend=1 00:22:24.342 --rc geninfo_all_blocks=1 00:22:24.342 --rc geninfo_unexecuted_blocks=1 00:22:24.342 00:22:24.342 ' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.342 --rc genhtml_branch_coverage=1 00:22:24.342 --rc genhtml_function_coverage=1 00:22:24.342 --rc genhtml_legend=1 00:22:24.342 --rc geninfo_all_blocks=1 00:22:24.342 --rc geninfo_unexecuted_blocks=1 00:22:24.342 00:22:24.342 ' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.342 --rc genhtml_branch_coverage=1 00:22:24.342 --rc genhtml_function_coverage=1 00:22:24.342 --rc genhtml_legend=1 00:22:24.342 --rc geninfo_all_blocks=1 00:22:24.342 --rc geninfo_unexecuted_blocks=1 00:22:24.342 00:22:24.342 ' 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.342 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:24.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.343 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.915 
12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.915 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:30.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.916 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.916 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:22:30.916 00:22:30.916 --- 10.0.0.2 ping statistics --- 00:22:30.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.916 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:30.916 00:22:30.916 --- 10.0.0.1 ping statistics --- 00:22:30.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.916 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=343052 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 343052 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 343052 ']' 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.916 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 [2024-12-13 12:27:57.795023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:30.916 [2024-12-13 12:27:57.795067] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:30.916 [2024-12-13 12:27:57.876080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.916 [2024-12-13 12:27:57.911585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.916 [2024-12-13 12:27:57.911617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.916 [2024-12-13 12:27:57.911624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.916 [2024-12-13 12:27:57.911629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.916 [2024-12-13 12:27:57.911634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
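The nvmfappstart step traced above reduces to one command, which the log shows verbatim a few records earlier: nvmf_tgt is launched inside the target network namespace with DPDK running on plain anonymous memory instead of hugepages. A minimal sketch of that launch; the pid capture at the end is an assumption about the harness's bookkeeping, not the script verbatim:

    # Launch the SPDK NVMe-oF target without hugepages, as traced above.
    # --no-huge -s 1024 gives the DPDK EAL 1024 MB of anonymous memory in
    # place of hugepages; -m 0x78 (binary 1111000) pins reactors to cores
    # 3-6, matching the "Reactor started on core 3..6" notices below;
    # -e 0xFFFF enables all tracepoint groups.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!    # assumed pid capture; the run above records nvmfpid=343052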
00:22:30.916 [2024-12-13 12:27:57.912729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.916 [2024-12-13 12:27:57.912839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:30.916 [2024-12-13 12:27:57.912946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.917 [2024-12-13 12:27:57.912946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 [2024-12-13 12:27:58.052956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 Malloc0 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.917 [2024-12-13 12:27:58.097251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.917 { 00:22:30.917 "params": { 00:22:30.917 "name": "Nvme$subsystem", 00:22:30.917 "trtype": "$TEST_TRANSPORT", 00:22:30.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.917 "adrfam": "ipv4", 00:22:30.917 "trsvcid": "$NVMF_PORT", 00:22:30.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.917 "hdgst": ${hdgst:-false}, 00:22:30.917 "ddgst": ${ddgst:-false} 00:22:30.917 }, 00:22:30.917 "method": "bdev_nvme_attach_controller" 00:22:30.917 } 00:22:30.917 EOF 00:22:30.917 )") 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:30.917 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:30.917 "params": { 00:22:30.917 "name": "Nvme1", 00:22:30.917 "trtype": "tcp", 00:22:30.917 "traddr": "10.0.0.2", 00:22:30.917 "adrfam": "ipv4", 00:22:30.917 "trsvcid": "4420", 00:22:30.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.917 "hdgst": false, 00:22:30.917 "ddgst": false 00:22:30.917 }, 00:22:30.917 "method": "bdev_nvme_attach_controller" 00:22:30.917 }' 00:22:30.917 [2024-12-13 12:27:58.147059] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
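Collapsed from the rpc_cmd traces above, target-side provisioning comes down to five RPC calls. The explicit rpc.py form below is a sketch: the harness's rpc_cmd wrapper and the -s socket flag are assumptions, but the method names and arguments are taken verbatim from the trace:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is the initiator half of the same pairing: bdevio reads it from /dev/fd/62 and runs bdev_nvme_attach_controller against 10.0.0.2:4420, so the two halves meet on the listener created by the last RPC.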
00:22:30.917 [2024-12-13 12:27:58.147113] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid343086 ] 00:22:30.917 [2024-12-13 12:27:58.222889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:30.917 [2024-12-13 12:27:58.260395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.917 [2024-12-13 12:27:58.260504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.917 [2024-12-13 12:27:58.260504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.917 I/O targets: 00:22:30.917 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:30.917 00:22:30.917 00:22:30.917 CUnit - A unit testing framework for C - Version 2.1-3 00:22:30.917 http://cunit.sourceforge.net/ 00:22:30.917 00:22:30.917 00:22:30.917 Suite: bdevio tests on: Nvme1n1 00:22:30.917 Test: blockdev write read block ...passed 00:22:30.917 Test: blockdev write zeroes read block ...passed 00:22:30.917 Test: blockdev write zeroes read no split ...passed 00:22:30.917 Test: blockdev write zeroes read split ...passed 00:22:31.176 Test: blockdev write zeroes read split partial ...passed 00:22:31.176 Test: blockdev reset ...[2024-12-13 12:27:58.631531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:31.176 [2024-12-13 12:27:58.631595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135cea0 (9): Bad file descriptor 00:22:31.176 [2024-12-13 12:27:58.685326] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:31.176 passed 00:22:31.176 Test: blockdev write read 8 blocks ...passed 00:22:31.176 Test: blockdev write read size > 128k ...passed 00:22:31.176 Test: blockdev write read invalid size ...passed 00:22:31.176 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:31.176 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:31.176 Test: blockdev write read max offset ...passed 00:22:31.176 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:31.176 Test: blockdev writev readv 8 blocks ...passed 00:22:31.176 Test: blockdev writev readv 30 x 1block ...passed 00:22:31.176 Test: blockdev writev readv block ...passed 00:22:31.176 Test: blockdev writev readv size > 128k ...passed 00:22:31.176 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:31.177 Test: blockdev comparev and writev ...[2024-12-13 12:27:58.857551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.857584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.857599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.857606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.857833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.857844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.857855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.857863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.858092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.858103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.858114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.858121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.858357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.858368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.177 [2024-12-13 12:27:58.858379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.177 [2024-12-13 12:27:58.858387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.435 passed 00:22:31.436 Test: blockdev nvme passthru rw ...passed 00:22:31.436 Test: blockdev nvme passthru vendor specific ...[2024-12-13 12:27:58.941166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.436 [2024-12-13 12:27:58.941183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.436 [2024-12-13 12:27:58.941290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.436 [2024-12-13 12:27:58.941300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.436 [2024-12-13 12:27:58.941401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.436 [2024-12-13 12:27:58.941411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.436 [2024-12-13 12:27:58.941509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.436 [2024-12-13 12:27:58.941519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.436 passed 00:22:31.436 Test: blockdev nvme admin passthru ...passed 00:22:31.436 Test: blockdev copy ...passed 00:22:31.436 00:22:31.436 Run Summary: Type Total Ran Passed Failed Inactive 00:22:31.436 suites 1 1 n/a 0 0 00:22:31.436 tests 23 23 23 0 0 00:22:31.436 asserts 152 152 152 0 n/a 00:22:31.436 00:22:31.436 Elapsed time = 1.065 seconds 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:31.694 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.695 rmmod nvme_tcp 00:22:31.695 rmmod nvme_fabrics 00:22:31.695 rmmod nvme_keyring 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 343052 ']' 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 343052 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 343052 ']' 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 343052 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343052 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343052' 00:22:31.695 killing process with pid 343052 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 343052 00:22:31.695 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 343052 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.953 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.490 00:22:34.490 real 0m10.084s 00:22:34.490 user 0m10.383s 00:22:34.490 sys 0m5.259s 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.490 ************************************ 00:22:34.490 END TEST nvmf_bdevio_no_huge 00:22:34.490 ************************************ 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.490 ************************************ 00:22:34.490 START TEST nvmf_tls 00:22:34.490 ************************************ 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.490 * Looking for test storage... 00:22:34.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.490 --rc genhtml_branch_coverage=1 00:22:34.490 --rc genhtml_function_coverage=1 00:22:34.490 --rc genhtml_legend=1 00:22:34.490 --rc geninfo_all_blocks=1 00:22:34.490 --rc geninfo_unexecuted_blocks=1 00:22:34.490 00:22:34.490 ' 00:22:34.490 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.490 --rc genhtml_branch_coverage=1 00:22:34.490 --rc genhtml_function_coverage=1 00:22:34.490 --rc genhtml_legend=1 00:22:34.490 --rc geninfo_all_blocks=1 00:22:34.490 --rc geninfo_unexecuted_blocks=1 00:22:34.490 00:22:34.490 ' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.491 --rc genhtml_branch_coverage=1 00:22:34.491 --rc genhtml_function_coverage=1 00:22:34.491 --rc genhtml_legend=1 00:22:34.491 --rc geninfo_all_blocks=1 00:22:34.491 --rc geninfo_unexecuted_blocks=1 00:22:34.491 00:22:34.491 ' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.491 --rc genhtml_branch_coverage=1 00:22:34.491 --rc genhtml_function_coverage=1 00:22:34.491 --rc genhtml_legend=1 00:22:34.491 --rc geninfo_all_blocks=1 00:22:34.491 --rc geninfo_unexecuted_blocks=1 00:22:34.491 00:22:34.491 ' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
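The scripts/common.sh trace above (cmp_versions, decimal, the ver1/ver2 loop) is the harness's dotted-version comparison, here deciding that the installed lcov 1.15 predates 2.x. A condensed, self-contained restatement of the same idiom, assuming purely numeric version fields; not the script verbatim:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v1 v2 i a b
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$3"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}          # missing fields count as 0
            ((a > b)) && { [[ $2 == '>'* ]]; return; }   # first differing field decides
            ((a < b)) && { [[ $2 == '<'* ]]; return; }
        done
        [[ $2 == *'='* ]]    # all fields equal: true only for <=, >=, ==
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 spelling"

Each test suite runs this check so it can pick the LCOV_OPTS spelling the installed lcov understands, which is why the same trace block repeats here for nvmf_tls after already appearing in nvmf_bdevio_no_huge.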
00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.491 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
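nvmftestinit, whose trace continues here, calls gather_supported_nvmf_pci_devs to fill the e810/x722/mlx arrays from a PCI bus cache keyed by vendor:device ID (0x8086 Intel, 0x15b3 Mellanox) and then maps each matching function to its interface through /sys/bus/pci/devices/$pci/net/. A rough sysfs-only sketch of that lookup, not SPDK's actual helper, with the ID list trimmed to devices seen in this run (the two E810 0x159b ports found a few lines below):

  # Print net devices backed by NICs whose PCI vendor:device we recognize.
  for pci in /sys/bus/pci/devices/*; do
      ven=$(<"$pci/vendor") dev=$(<"$pci/device")
      case "$ven:$dev" in
          0x8086:0x159b|0x8086:0x1592|0x15b3:0x1017)
              for net in "$pci"/net/*; do
                  # Skip functions with no bound network driver (empty glob).
                  [[ -e $net ]] && echo "Found ${pci##*/} (${ven} - ${dev}): ${net##*/}"
              done
              ;;
      esac
  done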
00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:41.063 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:41.063 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.063 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:41.064 Found net devices under 0000:af:00.0: cvl_0_0 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:41.064 Found net devices under 0000:af:00.1: cvl_0_1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:22:41.064 00:22:41.064 --- 10.0.0.2 ping statistics --- 00:22:41.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.064 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:22:41.064 00:22:41.064 --- 10.0.0.1 ping statistics --- 00:22:41.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.064 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=346759 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 346759 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346759 ']' 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.064 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.064 [2024-12-13 12:28:07.920151] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:41.064 [2024-12-13 12:28:07.920198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.064 [2024-12-13 12:28:07.999490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.064 [2024-12-13 12:28:08.021348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.065 [2024-12-13 12:28:08.021377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.065 [2024-12-13 12:28:08.021384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.065 [2024-12-13 12:28:08.021390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.065 [2024-12-13 12:28:08.021395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.065 [2024-12-13 12:28:08.021836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:41.065 true 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.065 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:41.324 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:41.324 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:41.324 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.582 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:41.842 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:41.842 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:41.842 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:42.101 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.101 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:42.360 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:42.360 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:42.360 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.360 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.360 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NUrtG8RbgP 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gYnfbRz2u8 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NUrtG8RbgP 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gYnfbRz2u8 00:22:42.619 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:42.878 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.137 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NUrtG8RbgP 00:22:43.137 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NUrtG8RbgP 00:22:43.137 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.395 [2024-12-13 12:28:10.895164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.395 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.654 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:43.654 [2024-12-13 12:28:11.268101] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:43.654 [2024-12-13 12:28:11.268300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.654 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:43.912 malloc0 00:22:43.912 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.171 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP 00:22:44.171 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.431 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NUrtG8RbgP 00:22:56.662 Initializing NVMe Controllers 00:22:56.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.662 Initialization complete. Launching workers. 00:22:56.662 ======================================================== 00:22:56.662 Latency(us) 00:22:56.662 Device Information : IOPS MiB/s Average min max 00:22:56.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17053.40 66.61 3752.99 936.30 6929.58 00:22:56.662 ======================================================== 00:22:56.662 Total : 17053.40 66.61 3752.99 936.30 6929.58 00:22:56.662 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUrtG8RbgP 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NUrtG8RbgP 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=349239 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 349239 /var/tmp/bdevperf.sock 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349239 ']' 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:56.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.662 [2024-12-13 12:28:22.214775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:56.662 [2024-12-13 12:28:22.214838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349239 ] 00:22:56.662 [2024-12-13 12:28:22.288285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.662 [2024-12-13 12:28:22.310698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.662 [2024-12-13 12:28:22.745631] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.662 TLSTESTn1 00:22:56.662 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.662 Running I/O for 10 seconds... 
00:22:57.262 5027.00 IOPS, 19.64 MiB/s [2024-12-13T11:28:26.011Z] 5222.50 IOPS, 20.40 MiB/s [2024-12-13T11:28:27.025Z] 5146.00 IOPS, 20.10 MiB/s [2024-12-13T11:28:28.008Z] 5094.75 IOPS, 19.90 MiB/s [2024-12-13T11:28:29.028Z] 5064.60 IOPS, 19.78 MiB/s [2024-12-13T11:28:30.005Z] 5018.67 IOPS, 19.60 MiB/s [2024-12-13T11:28:30.941Z] 4978.29 IOPS, 19.45 MiB/s [2024-12-13T11:28:32.317Z] 5004.12 IOPS, 19.55 MiB/s [2024-12-13T11:28:33.254Z] 4930.89 IOPS, 19.26 MiB/s [2024-12-13T11:28:33.254Z] 4863.40 IOPS, 19.00 MiB/s 00:23:05.554 Latency(us) 00:23:05.554 [2024-12-13T11:28:33.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.554 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.554 Verification LBA range: start 0x0 length 0x2000 00:23:05.554 TLSTESTn1 : 10.03 4860.90 18.99 0.00 0.00 26280.18 6772.05 39696.09 00:23:05.554 [2024-12-13T11:28:33.254Z] =================================================================================================================== 00:23:05.554 [2024-12-13T11:28:33.254Z] Total : 4860.90 18.99 0.00 0.00 26280.18 6772.05 39696.09 00:23:05.554 { 00:23:05.554 "results": [ 00:23:05.554 { 00:23:05.554 "job": "TLSTESTn1", 00:23:05.554 "core_mask": "0x4", 00:23:05.554 "workload": "verify", 00:23:05.554 "status": "finished", 00:23:05.554 "verify_range": { 00:23:05.554 "start": 0, 00:23:05.554 "length": 8192 00:23:05.554 }, 00:23:05.554 "queue_depth": 128, 00:23:05.554 "io_size": 4096, 00:23:05.554 "runtime": 10.031468, 00:23:05.554 "iops": 4860.903708211002, 00:23:05.554 "mibps": 18.987905110199225, 00:23:05.554 "io_failed": 0, 00:23:05.554 "io_timeout": 0, 00:23:05.554 "avg_latency_us": 26280.175871531505, 00:23:05.554 "min_latency_us": 6772.053333333333, 00:23:05.554 "max_latency_us": 39696.09142857143 00:23:05.554 } 00:23:05.554 ], 00:23:05.554 "core_count": 1 00:23:05.554 } 00:23:05.554 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.554 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 349239 00:23:05.554 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349239 ']' 00:23:05.554 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349239 00:23:05.554 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349239 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349239' 00:23:05.554 killing process with pid 349239 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349239 00:23:05.554 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.554 00:23:05.554 Latency(us) 00:23:05.554 [2024-12-13T11:28:33.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.554 [2024-12-13T11:28:33.254Z] 
=================================================================================================================== 00:23:05.554 [2024-12-13T11:28:33.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349239 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gYnfbRz2u8 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gYnfbRz2u8 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gYnfbRz2u8 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gYnfbRz2u8 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350899 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350899 /var/tmp/bdevperf.sock 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350899 ']' 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
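The TLSTESTn1 run above exercised the full TLS path end to end: both sides loaded /tmp/tmp.NUrtG8RbgP as key0 and bdevperf sustained roughly 4.9K IOPS for 10 seconds before being killed. Condensed, the RPC sequence the script drove was as follows; every command appears verbatim in the trace above, with rpc.py abbreviating the full scripts/rpc.py path, and the NOT case that follows repeats the attach with the mismatched second key /tmp/tmp.gYnfbRz2u8, expecting the Input/output error it then gets:

  # Target side (nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace):
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side, against bdevperf's RPC socket:
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0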
00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.554 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.554 [2024-12-13 12:28:33.251654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:05.554 [2024-12-13 12:28:33.251703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350899 ] 00:23:05.814 [2024-12-13 12:28:33.324410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.814 [2024-12-13 12:28:33.346198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.814 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.814 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.814 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gYnfbRz2u8 00:23:06.073 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.332 [2024-12-13 12:28:33.809541] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.332 [2024-12-13 12:28:33.814125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:06.332 [2024-12-13 12:28:33.814749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1340 (107): Transport endpoint is not connected 00:23:06.332 [2024-12-13 12:28:33.815742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1340 (9): Bad file descriptor 00:23:06.332 [2024-12-13 12:28:33.816744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:06.332 [2024-12-13 12:28:33.816754] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:06.332 [2024-12-13 12:28:33.816761] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:06.332 [2024-12-13 12:28:33.816770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:06.332 request: 00:23:06.332 { 00:23:06.332 "name": "TLSTEST", 00:23:06.332 "trtype": "tcp", 00:23:06.332 "traddr": "10.0.0.2", 00:23:06.332 "adrfam": "ipv4", 00:23:06.332 "trsvcid": "4420", 00:23:06.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.332 "prchk_reftag": false, 00:23:06.332 "prchk_guard": false, 00:23:06.332 "hdgst": false, 00:23:06.332 "ddgst": false, 00:23:06.332 "psk": "key0", 00:23:06.332 "allow_unrecognized_csi": false, 00:23:06.332 "method": "bdev_nvme_attach_controller", 00:23:06.332 "req_id": 1 00:23:06.332 } 00:23:06.332 Got JSON-RPC error response 00:23:06.332 response: 00:23:06.332 { 00:23:06.332 "code": -5, 00:23:06.332 "message": "Input/output error" 00:23:06.332 } 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 350899 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350899 ']' 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350899 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350899 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350899' 00:23:06.332 killing process with pid 350899 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350899 00:23:06.332 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.332 00:23:06.332 Latency(us) 00:23:06.332 [2024-12-13T11:28:34.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.332 [2024-12-13T11:28:34.032Z] =================================================================================================================== 00:23:06.332 [2024-12-13T11:28:34.032Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.332 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350899 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NUrtG8RbgP 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.NUrtG8RbgP 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NUrtG8RbgP 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NUrtG8RbgP 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351091 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351091 /var/tmp/bdevperf.sock 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351091 ']' 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.592 [2024-12-13 12:28:34.088480] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:06.592 [2024-12-13 12:28:34.088526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351091 ] 00:23:06.592 [2024-12-13 12:28:34.152837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.592 [2024-12-13 12:28:34.174039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.592 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP 00:23:06.851 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:07.111 [2024-12-13 12:28:34.628558] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.111 [2024-12-13 12:28:34.634449] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.111 [2024-12-13 12:28:34.634471] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.111 [2024-12-13 12:28:34.634495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.111 [2024-12-13 12:28:34.634768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb40340 (107): Transport endpoint is not connected 00:23:07.111 [2024-12-13 12:28:34.635763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb40340 (9): Bad file descriptor 00:23:07.111 [2024-12-13 12:28:34.636764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:07.111 [2024-12-13 12:28:34.636775] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.111 [2024-12-13 12:28:34.636786] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:07.111 [2024-12-13 12:28:34.636794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:07.111 request: 00:23:07.111 { 00:23:07.111 "name": "TLSTEST", 00:23:07.111 "trtype": "tcp", 00:23:07.111 "traddr": "10.0.0.2", 00:23:07.111 "adrfam": "ipv4", 00:23:07.111 "trsvcid": "4420", 00:23:07.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.111 "prchk_reftag": false, 00:23:07.111 "prchk_guard": false, 00:23:07.111 "hdgst": false, 00:23:07.111 "ddgst": false, 00:23:07.111 "psk": "key0", 00:23:07.111 "allow_unrecognized_csi": false, 00:23:07.111 "method": "bdev_nvme_attach_controller", 00:23:07.111 "req_id": 1 00:23:07.111 } 00:23:07.111 Got JSON-RPC error response 00:23:07.111 response: 00:23:07.111 { 00:23:07.111 "code": -5, 00:23:07.111 "message": "Input/output error" 00:23:07.111 } 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351091 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351091 ']' 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351091 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351091 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351091' 00:23:07.111 killing process with pid 351091 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351091 00:23:07.111 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.111 00:23:07.111 Latency(us) 00:23:07.111 [2024-12-13T11:28:34.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.111 [2024-12-13T11:28:34.811Z] =================================================================================================================== 00:23:07.111 [2024-12-13T11:28:34.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.111 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351091 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUrtG8RbgP 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.NUrtG8RbgP 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUrtG8RbgP 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NUrtG8RbgP 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351296 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351296 /var/tmp/bdevperf.sock 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351296 ']' 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.371 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.371 [2024-12-13 12:28:34.911853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:07.371 [2024-12-13 12:28:34.911904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351296 ] 00:23:07.371 [2024-12-13 12:28:34.984974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.371 [2024-12-13 12:28:35.006314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.628 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.628 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.628 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUrtG8RbgP 00:23:07.628 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.886 [2024-12-13 12:28:35.465304] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.886 [2024-12-13 12:28:35.476575] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:07.886 [2024-12-13 12:28:35.476596] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:07.887 [2024-12-13 12:28:35.476619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.887 [2024-12-13 12:28:35.477505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91340 (107): Transport endpoint is not connected 00:23:07.887 [2024-12-13 12:28:35.478499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91340 (9): Bad file descriptor 00:23:07.887 [2024-12-13 12:28:35.479501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:07.887 [2024-12-13 12:28:35.479512] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.887 [2024-12-13 12:28:35.479519] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:07.887 [2024-12-13 12:28:35.479529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:07.887 request: 00:23:07.887 { 00:23:07.887 "name": "TLSTEST", 00:23:07.887 "trtype": "tcp", 00:23:07.887 "traddr": "10.0.0.2", 00:23:07.887 "adrfam": "ipv4", 00:23:07.887 "trsvcid": "4420", 00:23:07.887 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.887 "prchk_reftag": false, 00:23:07.887 "prchk_guard": false, 00:23:07.887 "hdgst": false, 00:23:07.887 "ddgst": false, 00:23:07.887 "psk": "key0", 00:23:07.887 "allow_unrecognized_csi": false, 00:23:07.887 "method": "bdev_nvme_attach_controller", 00:23:07.887 "req_id": 1 00:23:07.887 } 00:23:07.887 Got JSON-RPC error response 00:23:07.887 response: 00:23:07.887 { 00:23:07.887 "code": -5, 00:23:07.887 "message": "Input/output error" 00:23:07.887 } 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351296 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351296 ']' 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351296 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351296 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351296' 00:23:07.887 killing process with pid 351296 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351296 00:23:07.887 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.887 00:23:07.887 Latency(us) 00:23:07.887 [2024-12-13T11:28:35.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.887 [2024-12-13T11:28:35.587Z] =================================================================================================================== 00:23:07.887 [2024-12-13T11:28:35.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.887 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351296 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.146 12:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351331 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351331 /var/tmp/bdevperf.sock 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351331 ']' 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.146 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.146 [2024-12-13 12:28:35.749178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:08.146 [2024-12-13 12:28:35.749225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351331 ] 00:23:08.146 [2024-12-13 12:28:35.823513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.405 [2024-12-13 12:28:35.846147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.405 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.405 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.405 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:08.665 [2024-12-13 12:28:36.116637] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:08.665 [2024-12-13 12:28:36.116671] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:08.665 request: 00:23:08.665 { 00:23:08.665 "name": "key0", 00:23:08.665 "path": "", 00:23:08.665 "method": "keyring_file_add_key", 00:23:08.665 "req_id": 1 00:23:08.665 } 00:23:08.665 Got JSON-RPC error response 00:23:08.665 response: 00:23:08.665 { 00:23:08.665 "code": -1, 00:23:08.665 "message": "Operation not permitted" 00:23:08.665 } 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.665 [2024-12-13 12:28:36.317247] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.665 [2024-12-13 12:28:36.317282] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:08.665 request: 00:23:08.665 { 00:23:08.665 "name": "TLSTEST", 00:23:08.665 "trtype": "tcp", 00:23:08.665 "traddr": "10.0.0.2", 00:23:08.665 "adrfam": "ipv4", 00:23:08.665 "trsvcid": "4420", 00:23:08.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.665 "prchk_reftag": false, 00:23:08.665 "prchk_guard": false, 00:23:08.665 "hdgst": false, 00:23:08.665 "ddgst": false, 00:23:08.665 "psk": "key0", 00:23:08.665 "allow_unrecognized_csi": false, 00:23:08.665 "method": "bdev_nvme_attach_controller", 00:23:08.665 "req_id": 1 00:23:08.665 } 00:23:08.665 Got JSON-RPC error response 00:23:08.665 response: 00:23:08.665 { 00:23:08.665 "code": -126, 00:23:08.665 "message": "Required key not available" 00:23:08.665 } 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351331 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351331 ']' 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351331 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.665 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351331 
00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351331' 00:23:08.924 killing process with pid 351331 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351331 00:23:08.924 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.924 00:23:08.924 Latency(us) 00:23:08.924 [2024-12-13T11:28:36.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.924 [2024-12-13T11:28:36.624Z] =================================================================================================================== 00:23:08.924 [2024-12-13T11:28:36.624Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351331 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 346759 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346759 ']' 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346759 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346759 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346759' 00:23:08.924 killing process with pid 346759 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346759 00:23:08.924 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346759 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.n1r6DmqXmb 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.n1r6DmqXmb 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351573 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351573 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351573 ']' 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.184 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.184 [2024-12-13 12:28:36.872988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:09.184 [2024-12-13 12:28:36.873035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.443 [2024-12-13 12:28:36.947457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.443 [2024-12-13 12:28:36.965480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.443 [2024-12-13 12:28:36.965513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:09.443 [2024-12-13 12:28:36.965521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.443 [2024-12-13 12:28:36.965527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.443 [2024-12-13 12:28:36.965532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.443 [2024-12-13 12:28:36.966048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n1r6DmqXmb 00:23:09.443 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:09.702 [2024-12-13 12:28:37.267834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.703 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:09.962 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.962 [2024-12-13 12:28:37.660854] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.962 [2024-12-13 12:28:37.661044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.221 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.221 malloc0 00:23:10.221 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.480 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:10.739 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1r6DmqXmb 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n1r6DmqXmb 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351822 00:23:10.998 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351822 /var/tmp/bdevperf.sock 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351822 ']' 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.999 [2024-12-13 12:28:38.501467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:10.999 [2024-12-13 12:28:38.501513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351822 ] 00:23:10.999 [2024-12-13 12:28:38.574123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.999 [2024-12-13 12:28:38.596430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.999 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:11.257 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.517 [2024-12-13 12:28:39.067538] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.517 TLSTESTn1 00:23:11.517 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:11.775 Running I/O for 10 seconds... 00:23:13.649 5386.00 IOPS, 21.04 MiB/s [2024-12-13T11:28:42.286Z] 5490.00 IOPS, 21.45 MiB/s [2024-12-13T11:28:43.664Z] 5507.33 IOPS, 21.51 MiB/s [2024-12-13T11:28:44.599Z] 5539.75 IOPS, 21.64 MiB/s [2024-12-13T11:28:45.536Z] 5278.00 IOPS, 20.62 MiB/s [2024-12-13T11:28:46.471Z] 5315.33 IOPS, 20.76 MiB/s [2024-12-13T11:28:47.407Z] 5255.86 IOPS, 20.53 MiB/s [2024-12-13T11:28:48.343Z] 5299.75 IOPS, 20.70 MiB/s [2024-12-13T11:28:49.281Z] 5325.44 IOPS, 20.80 MiB/s [2024-12-13T11:28:49.540Z] 5352.80 IOPS, 20.91 MiB/s 00:23:21.840 Latency(us) 00:23:21.840 [2024-12-13T11:28:49.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.840 Verification LBA range: start 0x0 length 0x2000 00:23:21.840 TLSTESTn1 : 10.02 5351.96 20.91 0.00 0.00 23874.66 4805.97 29085.50 00:23:21.840 [2024-12-13T11:28:49.540Z] =================================================================================================================== 00:23:21.840 [2024-12-13T11:28:49.540Z] Total : 5351.96 20.91 0.00 0.00 23874.66 4805.97 29085.50 00:23:21.840 { 00:23:21.840 "results": [ 00:23:21.840 { 00:23:21.840 "job": "TLSTESTn1", 00:23:21.840 "core_mask": "0x4", 00:23:21.840 "workload": "verify", 00:23:21.840 "status": "finished", 00:23:21.840 "verify_range": { 00:23:21.840 "start": 0, 00:23:21.840 "length": 8192 00:23:21.840 }, 00:23:21.840 "queue_depth": 128, 00:23:21.840 "io_size": 4096, 00:23:21.840 "runtime": 10.024923, 00:23:21.840 "iops": 5351.961306834975, 00:23:21.840 "mibps": 20.906098854824123, 00:23:21.840 "io_failed": 0, 00:23:21.840 "io_timeout": 0, 00:23:21.840 "avg_latency_us": 23874.65655953202, 00:23:21.840 "min_latency_us": 4805.973333333333, 00:23:21.840 "max_latency_us": 29085.500952380953 00:23:21.840 } 00:23:21.840 ], 00:23:21.840 
"core_count": 1 00:23:21.840 } 00:23:21.840 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 351822 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351822 ']' 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351822 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351822 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351822' 00:23:21.841 killing process with pid 351822 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351822 00:23:21.841 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.841 00:23:21.841 Latency(us) 00:23:21.841 [2024-12-13T11:28:49.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.841 [2024-12-13T11:28:49.541Z] =================================================================================================================== 00:23:21.841 [2024-12-13T11:28:49.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351822 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.n1r6DmqXmb 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1r6DmqXmb 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1r6DmqXmb 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1r6DmqXmb 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:21.841 
12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n1r6DmqXmb 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353607 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:21.841 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353607 /var/tmp/bdevperf.sock 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353607 ']' 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.100 [2024-12-13 12:28:49.584078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:22.100 [2024-12-13 12:28:49.584129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353607 ] 00:23:22.100 [2024-12-13 12:28:49.657649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.100 [2024-12-13 12:28:49.679042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.100 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:22.360 [2024-12-13 12:28:49.937000] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n1r6DmqXmb': 0100666 00:23:22.360 [2024-12-13 12:28:49.937033] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:22.360 request: 00:23:22.360 { 00:23:22.360 "name": "key0", 00:23:22.360 "path": "/tmp/tmp.n1r6DmqXmb", 00:23:22.360 "method": "keyring_file_add_key", 00:23:22.360 "req_id": 1 00:23:22.360 } 00:23:22.360 Got JSON-RPC error response 00:23:22.360 response: 00:23:22.360 { 00:23:22.360 "code": -1, 00:23:22.360 "message": "Operation not permitted" 00:23:22.360 } 00:23:22.360 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.619 [2024-12-13 12:28:50.141611] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.619 [2024-12-13 12:28:50.141645] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:22.619 request: 00:23:22.619 { 00:23:22.619 "name": "TLSTEST", 00:23:22.619 "trtype": "tcp", 00:23:22.619 "traddr": "10.0.0.2", 00:23:22.619 "adrfam": "ipv4", 00:23:22.619 "trsvcid": "4420", 00:23:22.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.619 "prchk_reftag": false, 00:23:22.619 "prchk_guard": false, 00:23:22.619 "hdgst": false, 00:23:22.619 "ddgst": false, 00:23:22.619 "psk": "key0", 00:23:22.619 "allow_unrecognized_csi": false, 00:23:22.619 "method": "bdev_nvme_attach_controller", 00:23:22.619 "req_id": 1 00:23:22.619 } 00:23:22.619 Got JSON-RPC error response 00:23:22.619 response: 00:23:22.619 { 00:23:22.619 "code": -126, 00:23:22.619 "message": "Required key not available" 00:23:22.619 } 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353607 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353607 ']' 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353607 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353607 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353607' 00:23:22.619 killing process with pid 353607 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353607 00:23:22.619 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.619 00:23:22.619 Latency(us) 00:23:22.619 [2024-12-13T11:28:50.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.619 [2024-12-13T11:28:50.319Z] =================================================================================================================== 00:23:22.619 [2024-12-13T11:28:50.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.619 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353607 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 351573 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351573 ']' 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351573 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351573 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351573' 00:23:22.878 killing process with pid 351573 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351573 00:23:22.878 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351573 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=353839 
00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 353839 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353839 ']' 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.138 [2024-12-13 12:28:50.641051] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:23.138 [2024-12-13 12:28:50.641096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.138 [2024-12-13 12:28:50.719591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.138 [2024-12-13 12:28:50.737666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.138 [2024-12-13 12:28:50.737698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.138 [2024-12-13 12:28:50.737704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.138 [2024-12-13 12:28:50.737710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.138 [2024-12-13 12:28:50.737715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.138 [2024-12-13 12:28:50.738205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.138 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n1r6DmqXmb 00:23:23.398 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.398 [2024-12-13 12:28:51.048552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.398 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.656 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.915 [2024-12-13 12:28:51.413500] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.915 [2024-12-13 12:28:51.413692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.915 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:23.915 malloc0 00:23:23.915 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.174 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:24.433 [2024-12-13 
12:28:51.970777] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n1r6DmqXmb': 0100666 00:23:24.433 [2024-12-13 12:28:51.970810] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:24.433 request: 00:23:24.433 { 00:23:24.433 "name": "key0", 00:23:24.433 "path": "/tmp/tmp.n1r6DmqXmb", 00:23:24.433 "method": "keyring_file_add_key", 00:23:24.433 "req_id": 1 00:23:24.433 } 00:23:24.433 Got JSON-RPC error response 00:23:24.433 response: 00:23:24.433 { 00:23:24.433 "code": -1, 00:23:24.433 "message": "Operation not permitted" 00:23:24.433 } 00:23:24.433 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.692 [2024-12-13 12:28:52.151267] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:24.692 [2024-12-13 12:28:52.151304] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:24.692 request: 00:23:24.692 { 00:23:24.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.692 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.692 "psk": "key0", 00:23:24.692 "method": "nvmf_subsystem_add_host", 00:23:24.692 "req_id": 1 00:23:24.692 } 00:23:24.692 Got JSON-RPC error response 00:23:24.692 response: 00:23:24.692 { 00:23:24.692 "code": -32603, 00:23:24.692 "message": "Internal error" 00:23:24.692 } 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353839 ']' 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353839' 00:23:24.692 killing process with pid 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353839 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.n1r6DmqXmb 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.692 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354104 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354104 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354104 ']' 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.951 [2024-12-13 12:28:52.444823] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:24.951 [2024-12-13 12:28:52.444869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.951 [2024-12-13 12:28:52.520451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.951 [2024-12-13 12:28:52.541285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.951 [2024-12-13 12:28:52.541320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.951 [2024-12-13 12:28:52.541327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.951 [2024-12-13 12:28:52.541333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.951 [2024-12-13 12:28:52.541338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
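[Annotation, not captured log output] The failed setup_nvmf_tgt above is the negative half of the test: keyring_file_add_key refuses a PSK file whose mode grants group/other access (the log shows 0100666), so the subsequent nvmf_subsystem_add_host fails with "Key 'key0' does not exist". A minimal sketch of the recovery the script performs next, with paths and NQNs taken from the trace (rpc.py stands for the full workspace path used in the log):

  chmod 0600 /tmp/tmp.n1r6DmqXmb    # owner-only perms; mode 0666 was rejected above
  rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0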
00:23:24.951 [2024-12-13 12:28:52.541813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.951 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.952 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.211 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.211 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:25.211 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n1r6DmqXmb 00:23:25.211 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.211 [2024-12-13 12:28:52.836798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.211 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.470 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.729 [2024-12-13 12:28:53.237840] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.729 [2024-12-13 12:28:53.238016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.729 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.988 malloc0 00:23:25.988 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.988 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:26.247 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=354351 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 354351 /var/tmp/bdevperf.sock 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 354351 ']' 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.506 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.506 [2024-12-13 12:28:54.072174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:26.506 [2024-12-13 12:28:54.072219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354351 ] 00:23:26.506 [2024-12-13 12:28:54.146296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.506 [2024-12-13 12:28:54.168441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.766 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.766 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.766 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:27.025 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.025 [2024-12-13 12:28:54.639935] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.025 TLSTESTn1 00:23:27.283 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:27.543 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:27.543 "subsystems": [ 00:23:27.543 { 00:23:27.543 "subsystem": "keyring", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "keyring_file_add_key", 00:23:27.543 "params": { 00:23:27.543 "name": "key0", 00:23:27.543 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:27.543 } 00:23:27.543 } 00:23:27.543 ] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "iobuf", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "iobuf_set_options", 00:23:27.543 "params": { 00:23:27.543 "small_pool_count": 8192, 00:23:27.543 "large_pool_count": 1024, 00:23:27.543 "small_bufsize": 8192, 00:23:27.543 "large_bufsize": 135168, 00:23:27.543 "enable_numa": false 00:23:27.543 } 00:23:27.543 } 00:23:27.543 ] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "sock", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "sock_set_default_impl", 00:23:27.543 "params": { 00:23:27.543 "impl_name": "posix" 
00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "sock_impl_set_options", 00:23:27.543 "params": { 00:23:27.543 "impl_name": "ssl", 00:23:27.543 "recv_buf_size": 4096, 00:23:27.543 "send_buf_size": 4096, 00:23:27.543 "enable_recv_pipe": true, 00:23:27.543 "enable_quickack": false, 00:23:27.543 "enable_placement_id": 0, 00:23:27.543 "enable_zerocopy_send_server": true, 00:23:27.543 "enable_zerocopy_send_client": false, 00:23:27.543 "zerocopy_threshold": 0, 00:23:27.543 "tls_version": 0, 00:23:27.543 "enable_ktls": false 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "sock_impl_set_options", 00:23:27.543 "params": { 00:23:27.543 "impl_name": "posix", 00:23:27.543 "recv_buf_size": 2097152, 00:23:27.543 "send_buf_size": 2097152, 00:23:27.543 "enable_recv_pipe": true, 00:23:27.543 "enable_quickack": false, 00:23:27.543 "enable_placement_id": 0, 00:23:27.543 "enable_zerocopy_send_server": true, 00:23:27.543 "enable_zerocopy_send_client": false, 00:23:27.543 "zerocopy_threshold": 0, 00:23:27.543 "tls_version": 0, 00:23:27.543 "enable_ktls": false 00:23:27.543 } 00:23:27.543 } 00:23:27.543 ] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "vmd", 00:23:27.543 "config": [] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "accel", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "accel_set_options", 00:23:27.543 "params": { 00:23:27.543 "small_cache_size": 128, 00:23:27.543 "large_cache_size": 16, 00:23:27.543 "task_count": 2048, 00:23:27.543 "sequence_count": 2048, 00:23:27.543 "buf_count": 2048 00:23:27.543 } 00:23:27.543 } 00:23:27.543 ] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "bdev", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "bdev_set_options", 00:23:27.543 "params": { 00:23:27.543 "bdev_io_pool_size": 65535, 00:23:27.543 "bdev_io_cache_size": 256, 00:23:27.543 "bdev_auto_examine": true, 00:23:27.543 "iobuf_small_cache_size": 128, 00:23:27.543 "iobuf_large_cache_size": 16 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_raid_set_options", 00:23:27.543 "params": { 00:23:27.543 "process_window_size_kb": 1024, 00:23:27.543 "process_max_bandwidth_mb_sec": 0 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_iscsi_set_options", 00:23:27.543 "params": { 00:23:27.543 "timeout_sec": 30 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_nvme_set_options", 00:23:27.543 "params": { 00:23:27.543 "action_on_timeout": "none", 00:23:27.543 "timeout_us": 0, 00:23:27.543 "timeout_admin_us": 0, 00:23:27.543 "keep_alive_timeout_ms": 10000, 00:23:27.543 "arbitration_burst": 0, 00:23:27.543 "low_priority_weight": 0, 00:23:27.543 "medium_priority_weight": 0, 00:23:27.543 "high_priority_weight": 0, 00:23:27.543 "nvme_adminq_poll_period_us": 10000, 00:23:27.543 "nvme_ioq_poll_period_us": 0, 00:23:27.543 "io_queue_requests": 0, 00:23:27.543 "delay_cmd_submit": true, 00:23:27.543 "transport_retry_count": 4, 00:23:27.543 "bdev_retry_count": 3, 00:23:27.543 "transport_ack_timeout": 0, 00:23:27.543 "ctrlr_loss_timeout_sec": 0, 00:23:27.543 "reconnect_delay_sec": 0, 00:23:27.543 "fast_io_fail_timeout_sec": 0, 00:23:27.543 "disable_auto_failback": false, 00:23:27.543 "generate_uuids": false, 00:23:27.543 "transport_tos": 0, 00:23:27.543 "nvme_error_stat": false, 00:23:27.543 "rdma_srq_size": 0, 00:23:27.543 "io_path_stat": false, 00:23:27.543 "allow_accel_sequence": false, 00:23:27.543 "rdma_max_cq_size": 0, 00:23:27.543 
"rdma_cm_event_timeout_ms": 0, 00:23:27.543 "dhchap_digests": [ 00:23:27.543 "sha256", 00:23:27.543 "sha384", 00:23:27.543 "sha512" 00:23:27.543 ], 00:23:27.543 "dhchap_dhgroups": [ 00:23:27.543 "null", 00:23:27.543 "ffdhe2048", 00:23:27.543 "ffdhe3072", 00:23:27.543 "ffdhe4096", 00:23:27.543 "ffdhe6144", 00:23:27.543 "ffdhe8192" 00:23:27.543 ], 00:23:27.543 "rdma_umr_per_io": false 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_nvme_set_hotplug", 00:23:27.543 "params": { 00:23:27.543 "period_us": 100000, 00:23:27.543 "enable": false 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_malloc_create", 00:23:27.543 "params": { 00:23:27.543 "name": "malloc0", 00:23:27.543 "num_blocks": 8192, 00:23:27.543 "block_size": 4096, 00:23:27.543 "physical_block_size": 4096, 00:23:27.543 "uuid": "06873be6-9c5d-42e6-acf9-05bcf95d786c", 00:23:27.543 "optimal_io_boundary": 0, 00:23:27.543 "md_size": 0, 00:23:27.543 "dif_type": 0, 00:23:27.543 "dif_is_head_of_md": false, 00:23:27.543 "dif_pi_format": 0 00:23:27.543 } 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "method": "bdev_wait_for_examine" 00:23:27.543 } 00:23:27.543 ] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "nbd", 00:23:27.543 "config": [] 00:23:27.543 }, 00:23:27.543 { 00:23:27.543 "subsystem": "scheduler", 00:23:27.543 "config": [ 00:23:27.543 { 00:23:27.543 "method": "framework_set_scheduler", 00:23:27.543 "params": { 00:23:27.544 "name": "static" 00:23:27.544 } 00:23:27.544 } 00:23:27.544 ] 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "subsystem": "nvmf", 00:23:27.544 "config": [ 00:23:27.544 { 00:23:27.544 "method": "nvmf_set_config", 00:23:27.544 "params": { 00:23:27.544 "discovery_filter": "match_any", 00:23:27.544 "admin_cmd_passthru": { 00:23:27.544 "identify_ctrlr": false 00:23:27.544 }, 00:23:27.544 "dhchap_digests": [ 00:23:27.544 "sha256", 00:23:27.544 "sha384", 00:23:27.544 "sha512" 00:23:27.544 ], 00:23:27.544 "dhchap_dhgroups": [ 00:23:27.544 "null", 00:23:27.544 "ffdhe2048", 00:23:27.544 "ffdhe3072", 00:23:27.544 "ffdhe4096", 00:23:27.544 "ffdhe6144", 00:23:27.544 "ffdhe8192" 00:23:27.544 ] 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_set_max_subsystems", 00:23:27.544 "params": { 00:23:27.544 "max_subsystems": 1024 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_set_crdt", 00:23:27.544 "params": { 00:23:27.544 "crdt1": 0, 00:23:27.544 "crdt2": 0, 00:23:27.544 "crdt3": 0 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_create_transport", 00:23:27.544 "params": { 00:23:27.544 "trtype": "TCP", 00:23:27.544 "max_queue_depth": 128, 00:23:27.544 "max_io_qpairs_per_ctrlr": 127, 00:23:27.544 "in_capsule_data_size": 4096, 00:23:27.544 "max_io_size": 131072, 00:23:27.544 "io_unit_size": 131072, 00:23:27.544 "max_aq_depth": 128, 00:23:27.544 "num_shared_buffers": 511, 00:23:27.544 "buf_cache_size": 4294967295, 00:23:27.544 "dif_insert_or_strip": false, 00:23:27.544 "zcopy": false, 00:23:27.544 "c2h_success": false, 00:23:27.544 "sock_priority": 0, 00:23:27.544 "abort_timeout_sec": 1, 00:23:27.544 "ack_timeout": 0, 00:23:27.544 "data_wr_pool_size": 0 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_create_subsystem", 00:23:27.544 "params": { 00:23:27.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.544 "allow_any_host": false, 00:23:27.544 "serial_number": "SPDK00000000000001", 00:23:27.544 "model_number": "SPDK bdev Controller", 00:23:27.544 "max_namespaces": 10, 
00:23:27.544 "min_cntlid": 1, 00:23:27.544 "max_cntlid": 65519, 00:23:27.544 "ana_reporting": false 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_subsystem_add_host", 00:23:27.544 "params": { 00:23:27.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.544 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.544 "psk": "key0" 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_subsystem_add_ns", 00:23:27.544 "params": { 00:23:27.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.544 "namespace": { 00:23:27.544 "nsid": 1, 00:23:27.544 "bdev_name": "malloc0", 00:23:27.544 "nguid": "06873BE69C5D42E6ACF905BCF95D786C", 00:23:27.544 "uuid": "06873be6-9c5d-42e6-acf9-05bcf95d786c", 00:23:27.544 "no_auto_visible": false 00:23:27.544 } 00:23:27.544 } 00:23:27.544 }, 00:23:27.544 { 00:23:27.544 "method": "nvmf_subsystem_add_listener", 00:23:27.544 "params": { 00:23:27.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.544 "listen_address": { 00:23:27.544 "trtype": "TCP", 00:23:27.544 "adrfam": "IPv4", 00:23:27.544 "traddr": "10.0.0.2", 00:23:27.544 "trsvcid": "4420" 00:23:27.544 }, 00:23:27.544 "secure_channel": true 00:23:27.544 } 00:23:27.544 } 00:23:27.544 ] 00:23:27.544 } 00:23:27.544 ] 00:23:27.544 }' 00:23:27.544 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:27.803 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:27.803 "subsystems": [ 00:23:27.803 { 00:23:27.803 "subsystem": "keyring", 00:23:27.803 "config": [ 00:23:27.803 { 00:23:27.803 "method": "keyring_file_add_key", 00:23:27.803 "params": { 00:23:27.803 "name": "key0", 00:23:27.803 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:27.803 } 00:23:27.803 } 00:23:27.803 ] 00:23:27.803 }, 00:23:27.803 { 00:23:27.803 "subsystem": "iobuf", 00:23:27.803 "config": [ 00:23:27.803 { 00:23:27.803 "method": "iobuf_set_options", 00:23:27.803 "params": { 00:23:27.803 "small_pool_count": 8192, 00:23:27.803 "large_pool_count": 1024, 00:23:27.803 "small_bufsize": 8192, 00:23:27.803 "large_bufsize": 135168, 00:23:27.803 "enable_numa": false 00:23:27.803 } 00:23:27.803 } 00:23:27.803 ] 00:23:27.803 }, 00:23:27.803 { 00:23:27.803 "subsystem": "sock", 00:23:27.803 "config": [ 00:23:27.803 { 00:23:27.803 "method": "sock_set_default_impl", 00:23:27.803 "params": { 00:23:27.803 "impl_name": "posix" 00:23:27.803 } 00:23:27.803 }, 00:23:27.803 { 00:23:27.804 "method": "sock_impl_set_options", 00:23:27.804 "params": { 00:23:27.804 "impl_name": "ssl", 00:23:27.804 "recv_buf_size": 4096, 00:23:27.804 "send_buf_size": 4096, 00:23:27.804 "enable_recv_pipe": true, 00:23:27.804 "enable_quickack": false, 00:23:27.804 "enable_placement_id": 0, 00:23:27.804 "enable_zerocopy_send_server": true, 00:23:27.804 "enable_zerocopy_send_client": false, 00:23:27.804 "zerocopy_threshold": 0, 00:23:27.804 "tls_version": 0, 00:23:27.804 "enable_ktls": false 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "sock_impl_set_options", 00:23:27.804 "params": { 00:23:27.804 "impl_name": "posix", 00:23:27.804 "recv_buf_size": 2097152, 00:23:27.804 "send_buf_size": 2097152, 00:23:27.804 "enable_recv_pipe": true, 00:23:27.804 "enable_quickack": false, 00:23:27.804 "enable_placement_id": 0, 00:23:27.804 "enable_zerocopy_send_server": true, 00:23:27.804 "enable_zerocopy_send_client": false, 00:23:27.804 "zerocopy_threshold": 0, 00:23:27.804 "tls_version": 0, 00:23:27.804 
"enable_ktls": false 00:23:27.804 } 00:23:27.804 } 00:23:27.804 ] 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "subsystem": "vmd", 00:23:27.804 "config": [] 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "subsystem": "accel", 00:23:27.804 "config": [ 00:23:27.804 { 00:23:27.804 "method": "accel_set_options", 00:23:27.804 "params": { 00:23:27.804 "small_cache_size": 128, 00:23:27.804 "large_cache_size": 16, 00:23:27.804 "task_count": 2048, 00:23:27.804 "sequence_count": 2048, 00:23:27.804 "buf_count": 2048 00:23:27.804 } 00:23:27.804 } 00:23:27.804 ] 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "subsystem": "bdev", 00:23:27.804 "config": [ 00:23:27.804 { 00:23:27.804 "method": "bdev_set_options", 00:23:27.804 "params": { 00:23:27.804 "bdev_io_pool_size": 65535, 00:23:27.804 "bdev_io_cache_size": 256, 00:23:27.804 "bdev_auto_examine": true, 00:23:27.804 "iobuf_small_cache_size": 128, 00:23:27.804 "iobuf_large_cache_size": 16 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_raid_set_options", 00:23:27.804 "params": { 00:23:27.804 "process_window_size_kb": 1024, 00:23:27.804 "process_max_bandwidth_mb_sec": 0 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_iscsi_set_options", 00:23:27.804 "params": { 00:23:27.804 "timeout_sec": 30 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_nvme_set_options", 00:23:27.804 "params": { 00:23:27.804 "action_on_timeout": "none", 00:23:27.804 "timeout_us": 0, 00:23:27.804 "timeout_admin_us": 0, 00:23:27.804 "keep_alive_timeout_ms": 10000, 00:23:27.804 "arbitration_burst": 0, 00:23:27.804 "low_priority_weight": 0, 00:23:27.804 "medium_priority_weight": 0, 00:23:27.804 "high_priority_weight": 0, 00:23:27.804 "nvme_adminq_poll_period_us": 10000, 00:23:27.804 "nvme_ioq_poll_period_us": 0, 00:23:27.804 "io_queue_requests": 512, 00:23:27.804 "delay_cmd_submit": true, 00:23:27.804 "transport_retry_count": 4, 00:23:27.804 "bdev_retry_count": 3, 00:23:27.804 "transport_ack_timeout": 0, 00:23:27.804 "ctrlr_loss_timeout_sec": 0, 00:23:27.804 "reconnect_delay_sec": 0, 00:23:27.804 "fast_io_fail_timeout_sec": 0, 00:23:27.804 "disable_auto_failback": false, 00:23:27.804 "generate_uuids": false, 00:23:27.804 "transport_tos": 0, 00:23:27.804 "nvme_error_stat": false, 00:23:27.804 "rdma_srq_size": 0, 00:23:27.804 "io_path_stat": false, 00:23:27.804 "allow_accel_sequence": false, 00:23:27.804 "rdma_max_cq_size": 0, 00:23:27.804 "rdma_cm_event_timeout_ms": 0, 00:23:27.804 "dhchap_digests": [ 00:23:27.804 "sha256", 00:23:27.804 "sha384", 00:23:27.804 "sha512" 00:23:27.804 ], 00:23:27.804 "dhchap_dhgroups": [ 00:23:27.804 "null", 00:23:27.804 "ffdhe2048", 00:23:27.804 "ffdhe3072", 00:23:27.804 "ffdhe4096", 00:23:27.804 "ffdhe6144", 00:23:27.804 "ffdhe8192" 00:23:27.804 ], 00:23:27.804 "rdma_umr_per_io": false 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_nvme_attach_controller", 00:23:27.804 "params": { 00:23:27.804 "name": "TLSTEST", 00:23:27.804 "trtype": "TCP", 00:23:27.804 "adrfam": "IPv4", 00:23:27.804 "traddr": "10.0.0.2", 00:23:27.804 "trsvcid": "4420", 00:23:27.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.804 "prchk_reftag": false, 00:23:27.804 "prchk_guard": false, 00:23:27.804 "ctrlr_loss_timeout_sec": 0, 00:23:27.804 "reconnect_delay_sec": 0, 00:23:27.804 "fast_io_fail_timeout_sec": 0, 00:23:27.804 "psk": "key0", 00:23:27.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.804 "hdgst": false, 00:23:27.804 "ddgst": false, 00:23:27.804 "multipath": "multipath" 
00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_nvme_set_hotplug", 00:23:27.804 "params": { 00:23:27.804 "period_us": 100000, 00:23:27.804 "enable": false 00:23:27.804 } 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "method": "bdev_wait_for_examine" 00:23:27.804 } 00:23:27.804 ] 00:23:27.804 }, 00:23:27.804 { 00:23:27.804 "subsystem": "nbd", 00:23:27.804 "config": [] 00:23:27.804 } 00:23:27.804 ] 00:23:27.804 }' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 354351 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354351 ']' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354351 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354351 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354351' 00:23:27.804 killing process with pid 354351 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354351 00:23:27.804 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.804 00:23:27.804 Latency(us) 00:23:27.804 [2024-12-13T11:28:55.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.804 [2024-12-13T11:28:55.504Z] =================================================================================================================== 00:23:27.804 [2024-12-13T11:28:55.504Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354351 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 354104 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354104 ']' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354104 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.804 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354104 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354104' 00:23:28.064 killing process with pid 354104 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354104 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354104 
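[Annotation, not captured log output] The two JSON dumps above exist so the next step can restart both applications from saved state: tls.sh captures each side's configuration with save_config, then (in the records that follow) feeds it back through -c. The /dev/fd/62 and /dev/fd/63 paths in the trace are how bash xtrace renders process substitution, so the flow is plausibly as sketched below; this is a hedged reconstruction, since only the /dev/fd paths are visible in the log.

  tgtconf=$(rpc.py save_config)                                 # target side, /var/tmp/spdk.sock
  bdevperfconf=$(rpc.py -s /var/tmp/bdevperf.sock save_config)  # initiator side
  nvmfappstart -m 0x2 -c <(echo "$tgtconf")                     # traced as -c /dev/fd/62
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 \
      -w verify -t 10 -c <(echo "$bdevperfconf")                # traced as -c /dev/fd/63

Both dumps must carry the same PSK reference for the replay to work: "key0" under keyring_file_add_key, plus "psk": "key0" in nvmf_subsystem_add_host on the target and in bdev_nvme_attach_controller on the initiator.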
00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.064 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:28.064 "subsystems": [ 00:23:28.064 { 00:23:28.064 "subsystem": "keyring", 00:23:28.064 "config": [ 00:23:28.064 { 00:23:28.064 "method": "keyring_file_add_key", 00:23:28.064 "params": { 00:23:28.064 "name": "key0", 00:23:28.064 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:28.064 } 00:23:28.064 } 00:23:28.064 ] 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "subsystem": "iobuf", 00:23:28.064 "config": [ 00:23:28.064 { 00:23:28.064 "method": "iobuf_set_options", 00:23:28.064 "params": { 00:23:28.064 "small_pool_count": 8192, 00:23:28.064 "large_pool_count": 1024, 00:23:28.064 "small_bufsize": 8192, 00:23:28.064 "large_bufsize": 135168, 00:23:28.064 "enable_numa": false 00:23:28.064 } 00:23:28.064 } 00:23:28.064 ] 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "subsystem": "sock", 00:23:28.064 "config": [ 00:23:28.064 { 00:23:28.064 "method": "sock_set_default_impl", 00:23:28.064 "params": { 00:23:28.064 "impl_name": "posix" 00:23:28.064 } 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "method": "sock_impl_set_options", 00:23:28.064 "params": { 00:23:28.064 "impl_name": "ssl", 00:23:28.064 "recv_buf_size": 4096, 00:23:28.064 "send_buf_size": 4096, 00:23:28.064 "enable_recv_pipe": true, 00:23:28.064 "enable_quickack": false, 00:23:28.064 "enable_placement_id": 0, 00:23:28.064 "enable_zerocopy_send_server": true, 00:23:28.064 "enable_zerocopy_send_client": false, 00:23:28.064 "zerocopy_threshold": 0, 00:23:28.064 "tls_version": 0, 00:23:28.064 "enable_ktls": false 00:23:28.064 } 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "method": "sock_impl_set_options", 00:23:28.064 "params": { 00:23:28.064 "impl_name": "posix", 00:23:28.064 "recv_buf_size": 2097152, 00:23:28.064 "send_buf_size": 2097152, 00:23:28.064 "enable_recv_pipe": true, 00:23:28.064 "enable_quickack": false, 00:23:28.064 "enable_placement_id": 0, 00:23:28.064 "enable_zerocopy_send_server": true, 00:23:28.064 "enable_zerocopy_send_client": false, 00:23:28.064 "zerocopy_threshold": 0, 00:23:28.064 "tls_version": 0, 00:23:28.064 "enable_ktls": false 00:23:28.064 } 00:23:28.064 } 00:23:28.064 ] 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "subsystem": "vmd", 00:23:28.064 "config": [] 00:23:28.064 }, 00:23:28.064 { 00:23:28.064 "subsystem": "accel", 00:23:28.064 "config": [ 00:23:28.064 { 00:23:28.064 "method": "accel_set_options", 00:23:28.064 "params": { 00:23:28.064 "small_cache_size": 128, 00:23:28.064 "large_cache_size": 16, 00:23:28.065 "task_count": 2048, 00:23:28.065 "sequence_count": 2048, 00:23:28.065 "buf_count": 2048 00:23:28.065 } 00:23:28.065 } 00:23:28.065 ] 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "subsystem": "bdev", 00:23:28.065 "config": [ 00:23:28.065 { 00:23:28.065 "method": "bdev_set_options", 00:23:28.065 "params": { 00:23:28.065 "bdev_io_pool_size": 65535, 00:23:28.065 "bdev_io_cache_size": 256, 00:23:28.065 "bdev_auto_examine": true, 00:23:28.065 "iobuf_small_cache_size": 128, 00:23:28.065 "iobuf_large_cache_size": 16 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_raid_set_options", 00:23:28.065 "params": { 00:23:28.065 "process_window_size_kb": 1024, 00:23:28.065 
"process_max_bandwidth_mb_sec": 0 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_iscsi_set_options", 00:23:28.065 "params": { 00:23:28.065 "timeout_sec": 30 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_nvme_set_options", 00:23:28.065 "params": { 00:23:28.065 "action_on_timeout": "none", 00:23:28.065 "timeout_us": 0, 00:23:28.065 "timeout_admin_us": 0, 00:23:28.065 "keep_alive_timeout_ms": 10000, 00:23:28.065 "arbitration_burst": 0, 00:23:28.065 "low_priority_weight": 0, 00:23:28.065 "medium_priority_weight": 0, 00:23:28.065 "high_priority_weight": 0, 00:23:28.065 "nvme_adminq_poll_period_us": 10000, 00:23:28.065 "nvme_ioq_poll_period_us": 0, 00:23:28.065 "io_queue_requests": 0, 00:23:28.065 "delay_cmd_submit": true, 00:23:28.065 "transport_retry_count": 4, 00:23:28.065 "bdev_retry_count": 3, 00:23:28.065 "transport_ack_timeout": 0, 00:23:28.065 "ctrlr_loss_timeout_sec": 0, 00:23:28.065 "reconnect_delay_sec": 0, 00:23:28.065 "fast_io_fail_timeout_sec": 0, 00:23:28.065 "disable_auto_failback": false, 00:23:28.065 "generate_uuids": false, 00:23:28.065 "transport_tos": 0, 00:23:28.065 "nvme_error_stat": false, 00:23:28.065 "rdma_srq_size": 0, 00:23:28.065 "io_path_stat": false, 00:23:28.065 "allow_accel_sequence": false, 00:23:28.065 "rdma_max_cq_size": 0, 00:23:28.065 "rdma_cm_event_timeout_ms": 0, 00:23:28.065 "dhchap_digests": [ 00:23:28.065 "sha256", 00:23:28.065 "sha384", 00:23:28.065 "sha512" 00:23:28.065 ], 00:23:28.065 "dhchap_dhgroups": [ 00:23:28.065 "null", 00:23:28.065 "ffdhe2048", 00:23:28.065 "ffdhe3072", 00:23:28.065 "ffdhe4096", 00:23:28.065 "ffdhe6144", 00:23:28.065 "ffdhe8192" 00:23:28.065 ], 00:23:28.065 "rdma_umr_per_io": false 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_nvme_set_hotplug", 00:23:28.065 "params": { 00:23:28.065 "period_us": 100000, 00:23:28.065 "enable": false 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_malloc_create", 00:23:28.065 "params": { 00:23:28.065 "name": "malloc0", 00:23:28.065 "num_blocks": 8192, 00:23:28.065 "block_size": 4096, 00:23:28.065 "physical_block_size": 4096, 00:23:28.065 "uuid": "06873be6-9c5d-42e6-acf9-05bcf95d786c", 00:23:28.065 "optimal_io_boundary": 0, 00:23:28.065 "md_size": 0, 00:23:28.065 "dif_type": 0, 00:23:28.065 "dif_is_head_of_md": false, 00:23:28.065 "dif_pi_format": 0 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "bdev_wait_for_examine" 00:23:28.065 } 00:23:28.065 ] 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "subsystem": "nbd", 00:23:28.065 "config": [] 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "subsystem": "scheduler", 00:23:28.065 "config": [ 00:23:28.065 { 00:23:28.065 "method": "framework_set_scheduler", 00:23:28.065 "params": { 00:23:28.065 "name": "static" 00:23:28.065 } 00:23:28.065 } 00:23:28.065 ] 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "subsystem": "nvmf", 00:23:28.065 "config": [ 00:23:28.065 { 00:23:28.065 "method": "nvmf_set_config", 00:23:28.065 "params": { 00:23:28.065 "discovery_filter": "match_any", 00:23:28.065 "admin_cmd_passthru": { 00:23:28.065 "identify_ctrlr": false 00:23:28.065 }, 00:23:28.065 "dhchap_digests": [ 00:23:28.065 "sha256", 00:23:28.065 "sha384", 00:23:28.065 "sha512" 00:23:28.065 ], 00:23:28.065 "dhchap_dhgroups": [ 00:23:28.065 "null", 00:23:28.065 "ffdhe2048", 00:23:28.065 "ffdhe3072", 00:23:28.065 "ffdhe4096", 00:23:28.065 "ffdhe6144", 00:23:28.065 "ffdhe8192" 00:23:28.065 ] 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 
"method": "nvmf_set_max_subsystems", 00:23:28.065 "params": { 00:23:28.065 "max_subsystems": 1024 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_set_crdt", 00:23:28.065 "params": { 00:23:28.065 "crdt1": 0, 00:23:28.065 "crdt2": 0, 00:23:28.065 "crdt3": 0 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_create_transport", 00:23:28.065 "params": { 00:23:28.065 "trtype": "TCP", 00:23:28.065 "max_queue_depth": 128, 00:23:28.065 "max_io_qpairs_per_ctrlr": 127, 00:23:28.065 "in_capsule_data_size": 4096, 00:23:28.065 "max_io_size": 131072, 00:23:28.065 "io_unit_size": 131072, 00:23:28.065 "max_aq_depth": 128, 00:23:28.065 "num_shared_buffers": 511, 00:23:28.065 "buf_cache_size": 4294967295, 00:23:28.065 "dif_insert_or_strip": false, 00:23:28.065 "zcopy": false, 00:23:28.065 "c2h_success": false, 00:23:28.065 "sock_priority": 0, 00:23:28.065 "abort_timeout_sec": 1, 00:23:28.065 "ack_timeout": 0, 00:23:28.065 "data_wr_pool_size": 0 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_create_subsystem", 00:23:28.065 "params": { 00:23:28.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.065 "allow_any_host": false, 00:23:28.065 "serial_number": "SPDK00000000000001", 00:23:28.065 "model_number": "SPDK bdev Controller", 00:23:28.065 "max_namespaces": 10, 00:23:28.065 "min_cntlid": 1, 00:23:28.065 "max_cntlid": 65519, 00:23:28.065 "ana_reporting": false 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_subsystem_add_host", 00:23:28.065 "params": { 00:23:28.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.065 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.065 "psk": "key0" 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_subsystem_add_ns", 00:23:28.065 "params": { 00:23:28.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.065 "namespace": { 00:23:28.065 "nsid": 1, 00:23:28.065 "bdev_name": "malloc0", 00:23:28.065 "nguid": "06873BE69C5D42E6ACF905BCF95D786C", 00:23:28.065 "uuid": "06873be6-9c5d-42e6-acf9-05bcf95d786c", 00:23:28.065 "no_auto_visible": false 00:23:28.065 } 00:23:28.065 } 00:23:28.065 }, 00:23:28.065 { 00:23:28.065 "method": "nvmf_subsystem_add_listener", 00:23:28.065 "params": { 00:23:28.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.065 "listen_address": { 00:23:28.065 "trtype": "TCP", 00:23:28.065 "adrfam": "IPv4", 00:23:28.065 "traddr": "10.0.0.2", 00:23:28.065 "trsvcid": "4420" 00:23:28.065 }, 00:23:28.065 "secure_channel": true 00:23:28.065 } 00:23:28.065 } 00:23:28.065 ] 00:23:28.065 } 00:23:28.065 ] 00:23:28.065 }' 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354706 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354706 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354706 ']' 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.065 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.066 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.066 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.066 [2024-12-13 12:28:55.751436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:28.066 [2024-12-13 12:28:55.751483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.325 [2024-12-13 12:28:55.826941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.325 [2024-12-13 12:28:55.847917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.325 [2024-12-13 12:28:55.847953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.325 [2024-12-13 12:28:55.847965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.325 [2024-12-13 12:28:55.847971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.325 [2024-12-13 12:28:55.847976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.325 [2024-12-13 12:28:55.848499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.584 [2024-12-13 12:28:56.056038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.584 [2024-12-13 12:28:56.088065] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.584 [2024-12-13 12:28:56.088254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=354837 00:23:29.152 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 354837 /var/tmp/bdevperf.sock 00:23:29.153 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354837 ']' 00:23:29.153 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.153 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:29.153 12:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.153 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.153 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:29.153 "subsystems": [ 00:23:29.153 { 00:23:29.153 "subsystem": "keyring", 00:23:29.153 "config": [ 00:23:29.153 { 00:23:29.153 "method": "keyring_file_add_key", 00:23:29.153 "params": { 00:23:29.153 "name": "key0", 00:23:29.153 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:29.153 } 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "iobuf", 00:23:29.153 "config": [ 00:23:29.153 { 00:23:29.153 "method": "iobuf_set_options", 00:23:29.153 "params": { 00:23:29.153 "small_pool_count": 8192, 00:23:29.153 "large_pool_count": 1024, 00:23:29.153 "small_bufsize": 8192, 00:23:29.153 "large_bufsize": 135168, 00:23:29.153 "enable_numa": false 00:23:29.153 } 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "sock", 00:23:29.153 "config": [ 00:23:29.153 { 00:23:29.153 "method": "sock_set_default_impl", 00:23:29.153 "params": { 00:23:29.153 "impl_name": "posix" 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "sock_impl_set_options", 00:23:29.153 "params": { 00:23:29.153 "impl_name": "ssl", 00:23:29.153 "recv_buf_size": 4096, 00:23:29.153 "send_buf_size": 4096, 00:23:29.153 "enable_recv_pipe": true, 00:23:29.153 "enable_quickack": false, 00:23:29.153 "enable_placement_id": 0, 00:23:29.153 "enable_zerocopy_send_server": true, 00:23:29.153 "enable_zerocopy_send_client": false, 00:23:29.153 "zerocopy_threshold": 0, 00:23:29.153 "tls_version": 0, 00:23:29.153 "enable_ktls": false 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "sock_impl_set_options", 00:23:29.153 "params": { 00:23:29.153 "impl_name": "posix", 00:23:29.153 "recv_buf_size": 2097152, 00:23:29.153 "send_buf_size": 2097152, 00:23:29.153 "enable_recv_pipe": true, 00:23:29.153 "enable_quickack": false, 00:23:29.153 "enable_placement_id": 0, 00:23:29.153 "enable_zerocopy_send_server": true, 00:23:29.153 "enable_zerocopy_send_client": false, 00:23:29.153 "zerocopy_threshold": 0, 00:23:29.153 "tls_version": 0, 00:23:29.153 "enable_ktls": false 00:23:29.153 } 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "vmd", 00:23:29.153 "config": [] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "accel", 00:23:29.153 "config": [ 00:23:29.153 { 00:23:29.153 "method": "accel_set_options", 00:23:29.153 "params": { 00:23:29.153 "small_cache_size": 128, 00:23:29.153 "large_cache_size": 16, 00:23:29.153 "task_count": 2048, 00:23:29.153 "sequence_count": 2048, 00:23:29.153 "buf_count": 2048 00:23:29.153 } 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "bdev", 00:23:29.153 "config": [ 00:23:29.153 { 00:23:29.153 "method": "bdev_set_options", 00:23:29.153 "params": { 00:23:29.153 "bdev_io_pool_size": 65535, 00:23:29.153 "bdev_io_cache_size": 256, 00:23:29.153 "bdev_auto_examine": true, 00:23:29.153 "iobuf_small_cache_size": 128, 00:23:29.153 "iobuf_large_cache_size": 16 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_raid_set_options", 00:23:29.153 "params": { 00:23:29.153 
"process_window_size_kb": 1024, 00:23:29.153 "process_max_bandwidth_mb_sec": 0 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_iscsi_set_options", 00:23:29.153 "params": { 00:23:29.153 "timeout_sec": 30 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_nvme_set_options", 00:23:29.153 "params": { 00:23:29.153 "action_on_timeout": "none", 00:23:29.153 "timeout_us": 0, 00:23:29.153 "timeout_admin_us": 0, 00:23:29.153 "keep_alive_timeout_ms": 10000, 00:23:29.153 "arbitration_burst": 0, 00:23:29.153 "low_priority_weight": 0, 00:23:29.153 "medium_priority_weight": 0, 00:23:29.153 "high_priority_weight": 0, 00:23:29.153 "nvme_adminq_poll_period_us": 10000, 00:23:29.153 "nvme_ioq_poll_period_us": 0, 00:23:29.153 "io_queue_requests": 512, 00:23:29.153 "delay_cmd_submit": true, 00:23:29.153 "transport_retry_count": 4, 00:23:29.153 "bdev_retry_count": 3, 00:23:29.153 "transport_ack_timeout": 0, 00:23:29.153 "ctrlr_loss_timeout_sec": 0, 00:23:29.153 "reconnect_delay_sec": 0, 00:23:29.153 "fast_io_fail_timeout_sec": 0, 00:23:29.153 "disable_auto_failback": false, 00:23:29.153 "generate_uuids": false, 00:23:29.153 "transport_tos": 0, 00:23:29.153 "nvme_error_stat": false, 00:23:29.153 "rdma_srq_size": 0, 00:23:29.153 "io_path_stat": false, 00:23:29.153 "allow_accel_sequence": false, 00:23:29.153 "rdma_max_cq_size": 0, 00:23:29.153 "rdma_cm_event_timeout_ms": 0, 00:23:29.153 "dhchap_digests": [ 00:23:29.153 "sha256", 00:23:29.153 "sha384", 00:23:29.153 "sha512" 00:23:29.153 ], 00:23:29.153 "dhchap_dhgroups": [ 00:23:29.153 "null", 00:23:29.153 "ffdhe2048", 00:23:29.153 "ffdhe3072", 00:23:29.153 "ffdhe4096", 00:23:29.153 "ffdhe6144", 00:23:29.153 "ffdhe8192" 00:23:29.153 ], 00:23:29.153 "rdma_umr_per_io": false 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_nvme_attach_controller", 00:23:29.153 "params": { 00:23:29.153 "name": "TLSTEST", 00:23:29.153 "trtype": "TCP", 00:23:29.153 "adrfam": "IPv4", 00:23:29.153 "traddr": "10.0.0.2", 00:23:29.153 "trsvcid": "4420", 00:23:29.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.153 "prchk_reftag": false, 00:23:29.153 "prchk_guard": false, 00:23:29.153 "ctrlr_loss_timeout_sec": 0, 00:23:29.153 "reconnect_delay_sec": 0, 00:23:29.153 "fast_io_fail_timeout_sec": 0, 00:23:29.153 "psk": "key0", 00:23:29.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.153 "hdgst": false, 00:23:29.153 "ddgst": false, 00:23:29.153 "multipath": "multipath" 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_nvme_set_hotplug", 00:23:29.153 "params": { 00:23:29.153 "period_us": 100000, 00:23:29.153 "enable": false 00:23:29.153 } 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "method": "bdev_wait_for_examine" 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }, 00:23:29.153 { 00:23:29.153 "subsystem": "nbd", 00:23:29.153 "config": [] 00:23:29.153 } 00:23:29.153 ] 00:23:29.153 }' 00:23:29.154 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.154 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.154 [2024-12-13 12:28:56.664401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:29.154 [2024-12-13 12:28:56.664451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354837 ] 00:23:29.154 [2024-12-13 12:28:56.738277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.154 [2024-12-13 12:28:56.760665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.412 [2024-12-13 12:28:56.908441] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.980 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.980 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.980 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:29.980 Running I/O for 10 seconds... 00:23:32.295 4737.00 IOPS, 18.50 MiB/s [2024-12-13T11:29:00.929Z] 5075.00 IOPS, 19.82 MiB/s [2024-12-13T11:29:01.864Z] 5252.67 IOPS, 20.52 MiB/s [2024-12-13T11:29:02.801Z] 5169.75 IOPS, 20.19 MiB/s [2024-12-13T11:29:03.737Z] 5167.40 IOPS, 20.19 MiB/s [2024-12-13T11:29:04.673Z] 5139.83 IOPS, 20.08 MiB/s [2024-12-13T11:29:06.053Z] 5109.57 IOPS, 19.96 MiB/s [2024-12-13T11:29:06.621Z] 5094.12 IOPS, 19.90 MiB/s [2024-12-13T11:29:07.999Z] 5062.44 IOPS, 19.78 MiB/s [2024-12-13T11:29:07.999Z] 5072.40 IOPS, 19.81 MiB/s 00:23:40.299 Latency(us) 00:23:40.299 [2024-12-13T11:29:07.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.299 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.299 Verification LBA range: start 0x0 length 0x2000 00:23:40.299 TLSTESTn1 : 10.02 5076.51 19.83 0.00 0.00 25177.45 6241.52 44189.99 00:23:40.299 [2024-12-13T11:29:07.999Z] =================================================================================================================== 00:23:40.299 [2024-12-13T11:29:07.999Z] Total : 5076.51 19.83 0.00 0.00 25177.45 6241.52 44189.99 00:23:40.299 { 00:23:40.299 "results": [ 00:23:40.299 { 00:23:40.299 "job": "TLSTESTn1", 00:23:40.299 "core_mask": "0x4", 00:23:40.299 "workload": "verify", 00:23:40.299 "status": "finished", 00:23:40.299 "verify_range": { 00:23:40.299 "start": 0, 00:23:40.299 "length": 8192 00:23:40.299 }, 00:23:40.299 "queue_depth": 128, 00:23:40.299 "io_size": 4096, 00:23:40.299 "runtime": 10.016918, 00:23:40.299 "iops": 5076.511557746604, 00:23:40.299 "mibps": 19.830123272447672, 00:23:40.299 "io_failed": 0, 00:23:40.299 "io_timeout": 0, 00:23:40.299 "avg_latency_us": 25177.452583242735, 00:23:40.299 "min_latency_us": 6241.523809523809, 00:23:40.299 "max_latency_us": 44189.98857142857 00:23:40.299 } 00:23:40.299 ], 00:23:40.299 "core_count": 1 00:23:40.299 } 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 354837 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354837 ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354837 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354837 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354837' 00:23:40.299 killing process with pid 354837 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354837 00:23:40.299 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.299 00:23:40.299 Latency(us) 00:23:40.299 [2024-12-13T11:29:07.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.299 [2024-12-13T11:29:07.999Z] =================================================================================================================== 00:23:40.299 [2024-12-13T11:29:07.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354837 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 354706 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354706 ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354706 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354706 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354706' 00:23:40.299 killing process with pid 354706 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354706 00:23:40.299 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354706 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=356626 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 356626 00:23:40.558 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 356626 ']' 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.558 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.558 [2024-12-13 12:29:08.130836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:40.558 [2024-12-13 12:29:08.130885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.558 [2024-12-13 12:29:08.207262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.558 [2024-12-13 12:29:08.227363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.559 [2024-12-13 12:29:08.227398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.559 [2024-12-13 12:29:08.227405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.559 [2024-12-13 12:29:08.227410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.559 [2024-12-13 12:29:08.227415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
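[Annotation, not captured log output] The records that follow replay setup_nvmf_tgt once more against the fresh target (pid 356626). Condensed from the trace, the target-side TLS bring-up is:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k     # -k requests a TLS listener; logged as experimental
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0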
00:23:40.559 [2024-12-13 12:29:08.227935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.n1r6DmqXmb 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n1r6DmqXmb 00:23:40.818 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.076 [2024-12-13 12:29:08.530971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.077 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.077 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.335 [2024-12-13 12:29:08.891892] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.335 [2024-12-13 12:29:08.892090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.336 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.594 malloc0 00:23:41.595 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:41.595 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:41.854 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=356951 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 356951 /var/tmp/bdevperf.sock 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 356951 ']' 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.113 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.113 [2024-12-13 12:29:09.687691] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:42.113 [2024-12-13 12:29:09.687744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356951 ] 00:23:42.113 [2024-12-13 12:29:09.761591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.113 [2024-12-13 12:29:09.783354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.374 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.374 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.374 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:42.374 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:42.633 [2024-12-13 12:29:10.238347] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.633 nvme0n1 00:23:42.633 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.891 Running I/O for 1 seconds... 
00:23:43.829 5255.00 IOPS, 20.53 MiB/s 00:23:43.829 Latency(us) 00:23:43.829 [2024-12-13T11:29:11.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.829 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.829 Verification LBA range: start 0x0 length 0x2000 00:23:43.829 nvme0n1 : 1.02 5283.03 20.64 0.00 0.00 24004.57 4837.18 33454.57 00:23:43.829 [2024-12-13T11:29:11.529Z] =================================================================================================================== 00:23:43.829 [2024-12-13T11:29:11.529Z] Total : 5283.03 20.64 0.00 0.00 24004.57 4837.18 33454.57 00:23:43.829 { 00:23:43.829 "results": [ 00:23:43.829 { 00:23:43.829 "job": "nvme0n1", 00:23:43.829 "core_mask": "0x2", 00:23:43.829 "workload": "verify", 00:23:43.829 "status": "finished", 00:23:43.829 "verify_range": { 00:23:43.829 "start": 0, 00:23:43.829 "length": 8192 00:23:43.829 }, 00:23:43.829 "queue_depth": 128, 00:23:43.829 "io_size": 4096, 00:23:43.829 "runtime": 1.019113, 00:23:43.829 "iops": 5283.025532988, 00:23:43.829 "mibps": 20.636818488234375, 00:23:43.829 "io_failed": 0, 00:23:43.829 "io_timeout": 0, 00:23:43.829 "avg_latency_us": 24004.574236184817, 00:23:43.829 "min_latency_us": 4837.1809523809525, 00:23:43.829 "max_latency_us": 33454.56761904762 00:23:43.829 } 00:23:43.829 ], 00:23:43.829 "core_count": 1 00:23:43.829 } 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 356951 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356951 ']' 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356951 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356951 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356951' 00:23:43.829 killing process with pid 356951 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356951 00:23:43.829 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.829 00:23:43.829 Latency(us) 00:23:43.829 [2024-12-13T11:29:11.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.829 [2024-12-13T11:29:11.529Z] =================================================================================================================== 00:23:43.829 [2024-12-13T11:29:11.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.829 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356951 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 356626 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356626 ']' 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356626 00:23:44.089 12:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356626 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356626' 00:23:44.089 killing process with pid 356626 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356626 00:23:44.089 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356626 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357334 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357334 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357334 ']' 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.348 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 [2024-12-13 12:29:11.933773] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:44.348 [2024-12-13 12:29:11.933825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.348 [2024-12-13 12:29:12.010502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.348 [2024-12-13 12:29:12.027931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.348 [2024-12-13 12:29:12.027968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
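The bdevperf summary in the results block above is easy to spot-check: throughput in MiB/s is just IOPS times the 4 KiB I/O size (from -o 4k) divided by 2^20. A quick check with the values copied from the JSON "results" object:

# iops and io_size taken verbatim from the results block above
awk 'BEGIN { iops = 5283.025532988; io_size = 4096
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 20.64 MiB/s, matching the reported "mibps" of 20.6368...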
00:23:44.348 [2024-12-13 12:29:12.027975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.348 [2024-12-13 12:29:12.027981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.348 [2024-12-13 12:29:12.027985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.348 [2024-12-13 12:29:12.028491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.607 [2024-12-13 12:29:12.170860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.607 malloc0 00:23:44.607 [2024-12-13 12:29:12.198966] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.607 [2024-12-13 12:29:12.199157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=357356 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 357356 /var/tmp/bdevperf.sock 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357356 ']' 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.607 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.608 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.608 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.608 [2024-12-13 12:29:12.271528] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
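For reference, the TLS-enabled target state that this bdevperf instance is about to attach to can be reproduced with the same discrete rpc.py calls used for the first target earlier in this run; collected in one place and slightly regrouped for readability (key path and NQNs copied from the trace, -k marks the listener as a TLS listener):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o            # -o: c2h_success=false, as in the saved config below
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC bdev_malloc_create 32 4096 -b malloc0      # 32 MiB ram disk, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb   # TLS PSK file
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k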
00:23:44.608 [2024-12-13 12:29:12.271567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357356 ] 00:23:44.866 [2024-12-13 12:29:12.345190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.866 [2024-12-13 12:29:12.367582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.866 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.866 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.866 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1r6DmqXmb 00:23:45.124 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:45.124 [2024-12-13 12:29:12.810299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.381 nvme0n1 00:23:45.381 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.381 Running I/O for 1 seconds... 00:23:46.317 4957.00 IOPS, 19.36 MiB/s 00:23:46.317 Latency(us) 00:23:46.317 [2024-12-13T11:29:14.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.317 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:46.317 Verification LBA range: start 0x0 length 0x2000 00:23:46.317 nvme0n1 : 1.01 5019.31 19.61 0.00 0.00 25329.45 5804.62 41693.38 00:23:46.317 [2024-12-13T11:29:14.017Z] =================================================================================================================== 00:23:46.317 [2024-12-13T11:29:14.017Z] Total : 5019.31 19.61 0.00 0.00 25329.45 5804.62 41693.38 00:23:46.317 { 00:23:46.317 "results": [ 00:23:46.317 { 00:23:46.317 "job": "nvme0n1", 00:23:46.317 "core_mask": "0x2", 00:23:46.317 "workload": "verify", 00:23:46.317 "status": "finished", 00:23:46.317 "verify_range": { 00:23:46.317 "start": 0, 00:23:46.317 "length": 8192 00:23:46.317 }, 00:23:46.317 "queue_depth": 128, 00:23:46.317 "io_size": 4096, 00:23:46.317 "runtime": 1.013087, 00:23:46.317 "iops": 5019.312260447523, 00:23:46.317 "mibps": 19.606688517373136, 00:23:46.317 "io_failed": 0, 00:23:46.317 "io_timeout": 0, 00:23:46.317 "avg_latency_us": 25329.45073259353, 00:23:46.317 "min_latency_us": 5804.617142857142, 00:23:46.317 "max_latency_us": 41693.37904761905 00:23:46.317 } 00:23:46.317 ], 00:23:46.317 "core_count": 1 00:23:46.317 } 00:23:46.317 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:46.317 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.317 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.576 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.576 12:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:46.576 "subsystems": [ 00:23:46.576 { 00:23:46.576 "subsystem": "keyring", 00:23:46.576 "config": [ 00:23:46.576 { 00:23:46.576 "method": "keyring_file_add_key", 00:23:46.576 "params": { 00:23:46.576 "name": "key0", 00:23:46.576 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:46.576 } 00:23:46.576 } 00:23:46.576 ] 00:23:46.576 }, 00:23:46.576 { 00:23:46.576 "subsystem": "iobuf", 00:23:46.576 "config": [ 00:23:46.576 { 00:23:46.576 "method": "iobuf_set_options", 00:23:46.576 "params": { 00:23:46.577 "small_pool_count": 8192, 00:23:46.577 "large_pool_count": 1024, 00:23:46.577 "small_bufsize": 8192, 00:23:46.577 "large_bufsize": 135168, 00:23:46.577 "enable_numa": false 00:23:46.577 } 00:23:46.577 } 00:23:46.577 ] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "sock", 00:23:46.577 "config": [ 00:23:46.577 { 00:23:46.577 "method": "sock_set_default_impl", 00:23:46.577 "params": { 00:23:46.577 "impl_name": "posix" 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "sock_impl_set_options", 00:23:46.577 "params": { 00:23:46.577 "impl_name": "ssl", 00:23:46.577 "recv_buf_size": 4096, 00:23:46.577 "send_buf_size": 4096, 00:23:46.577 "enable_recv_pipe": true, 00:23:46.577 "enable_quickack": false, 00:23:46.577 "enable_placement_id": 0, 00:23:46.577 "enable_zerocopy_send_server": true, 00:23:46.577 "enable_zerocopy_send_client": false, 00:23:46.577 "zerocopy_threshold": 0, 00:23:46.577 "tls_version": 0, 00:23:46.577 "enable_ktls": false 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "sock_impl_set_options", 00:23:46.577 "params": { 00:23:46.577 "impl_name": "posix", 00:23:46.577 "recv_buf_size": 2097152, 00:23:46.577 "send_buf_size": 2097152, 00:23:46.577 "enable_recv_pipe": true, 00:23:46.577 "enable_quickack": false, 00:23:46.577 "enable_placement_id": 0, 00:23:46.577 "enable_zerocopy_send_server": true, 00:23:46.577 "enable_zerocopy_send_client": false, 00:23:46.577 "zerocopy_threshold": 0, 00:23:46.577 "tls_version": 0, 00:23:46.577 "enable_ktls": false 00:23:46.577 } 00:23:46.577 } 00:23:46.577 ] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "vmd", 00:23:46.577 "config": [] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "accel", 00:23:46.577 "config": [ 00:23:46.577 { 00:23:46.577 "method": "accel_set_options", 00:23:46.577 "params": { 00:23:46.577 "small_cache_size": 128, 00:23:46.577 "large_cache_size": 16, 00:23:46.577 "task_count": 2048, 00:23:46.577 "sequence_count": 2048, 00:23:46.577 "buf_count": 2048 00:23:46.577 } 00:23:46.577 } 00:23:46.577 ] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "bdev", 00:23:46.577 "config": [ 00:23:46.577 { 00:23:46.577 "method": "bdev_set_options", 00:23:46.577 "params": { 00:23:46.577 "bdev_io_pool_size": 65535, 00:23:46.577 "bdev_io_cache_size": 256, 00:23:46.577 "bdev_auto_examine": true, 00:23:46.577 "iobuf_small_cache_size": 128, 00:23:46.577 "iobuf_large_cache_size": 16 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_raid_set_options", 00:23:46.577 "params": { 00:23:46.577 "process_window_size_kb": 1024, 00:23:46.577 "process_max_bandwidth_mb_sec": 0 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_iscsi_set_options", 00:23:46.577 "params": { 00:23:46.577 "timeout_sec": 30 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_nvme_set_options", 00:23:46.577 "params": { 00:23:46.577 "action_on_timeout": "none", 00:23:46.577 
"timeout_us": 0, 00:23:46.577 "timeout_admin_us": 0, 00:23:46.577 "keep_alive_timeout_ms": 10000, 00:23:46.577 "arbitration_burst": 0, 00:23:46.577 "low_priority_weight": 0, 00:23:46.577 "medium_priority_weight": 0, 00:23:46.577 "high_priority_weight": 0, 00:23:46.577 "nvme_adminq_poll_period_us": 10000, 00:23:46.577 "nvme_ioq_poll_period_us": 0, 00:23:46.577 "io_queue_requests": 0, 00:23:46.577 "delay_cmd_submit": true, 00:23:46.577 "transport_retry_count": 4, 00:23:46.577 "bdev_retry_count": 3, 00:23:46.577 "transport_ack_timeout": 0, 00:23:46.577 "ctrlr_loss_timeout_sec": 0, 00:23:46.577 "reconnect_delay_sec": 0, 00:23:46.577 "fast_io_fail_timeout_sec": 0, 00:23:46.577 "disable_auto_failback": false, 00:23:46.577 "generate_uuids": false, 00:23:46.577 "transport_tos": 0, 00:23:46.577 "nvme_error_stat": false, 00:23:46.577 "rdma_srq_size": 0, 00:23:46.577 "io_path_stat": false, 00:23:46.577 "allow_accel_sequence": false, 00:23:46.577 "rdma_max_cq_size": 0, 00:23:46.577 "rdma_cm_event_timeout_ms": 0, 00:23:46.577 "dhchap_digests": [ 00:23:46.577 "sha256", 00:23:46.577 "sha384", 00:23:46.577 "sha512" 00:23:46.577 ], 00:23:46.577 "dhchap_dhgroups": [ 00:23:46.577 "null", 00:23:46.577 "ffdhe2048", 00:23:46.577 "ffdhe3072", 00:23:46.577 "ffdhe4096", 00:23:46.577 "ffdhe6144", 00:23:46.577 "ffdhe8192" 00:23:46.577 ], 00:23:46.577 "rdma_umr_per_io": false 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_nvme_set_hotplug", 00:23:46.577 "params": { 00:23:46.577 "period_us": 100000, 00:23:46.577 "enable": false 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_malloc_create", 00:23:46.577 "params": { 00:23:46.577 "name": "malloc0", 00:23:46.577 "num_blocks": 8192, 00:23:46.577 "block_size": 4096, 00:23:46.577 "physical_block_size": 4096, 00:23:46.577 "uuid": "1bdefccd-90cb-4c70-bfce-b9f11ecb3314", 00:23:46.577 "optimal_io_boundary": 0, 00:23:46.577 "md_size": 0, 00:23:46.577 "dif_type": 0, 00:23:46.577 "dif_is_head_of_md": false, 00:23:46.577 "dif_pi_format": 0 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "bdev_wait_for_examine" 00:23:46.577 } 00:23:46.577 ] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "nbd", 00:23:46.577 "config": [] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "scheduler", 00:23:46.577 "config": [ 00:23:46.577 { 00:23:46.577 "method": "framework_set_scheduler", 00:23:46.577 "params": { 00:23:46.577 "name": "static" 00:23:46.577 } 00:23:46.577 } 00:23:46.577 ] 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "subsystem": "nvmf", 00:23:46.577 "config": [ 00:23:46.577 { 00:23:46.577 "method": "nvmf_set_config", 00:23:46.577 "params": { 00:23:46.577 "discovery_filter": "match_any", 00:23:46.577 "admin_cmd_passthru": { 00:23:46.577 "identify_ctrlr": false 00:23:46.577 }, 00:23:46.577 "dhchap_digests": [ 00:23:46.577 "sha256", 00:23:46.577 "sha384", 00:23:46.577 "sha512" 00:23:46.577 ], 00:23:46.577 "dhchap_dhgroups": [ 00:23:46.577 "null", 00:23:46.577 "ffdhe2048", 00:23:46.577 "ffdhe3072", 00:23:46.577 "ffdhe4096", 00:23:46.577 "ffdhe6144", 00:23:46.577 "ffdhe8192" 00:23:46.577 ] 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "nvmf_set_max_subsystems", 00:23:46.577 "params": { 00:23:46.577 "max_subsystems": 1024 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "nvmf_set_crdt", 00:23:46.577 "params": { 00:23:46.577 "crdt1": 0, 00:23:46.577 "crdt2": 0, 00:23:46.577 "crdt3": 0 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": 
"nvmf_create_transport", 00:23:46.577 "params": { 00:23:46.577 "trtype": "TCP", 00:23:46.577 "max_queue_depth": 128, 00:23:46.577 "max_io_qpairs_per_ctrlr": 127, 00:23:46.577 "in_capsule_data_size": 4096, 00:23:46.577 "max_io_size": 131072, 00:23:46.577 "io_unit_size": 131072, 00:23:46.577 "max_aq_depth": 128, 00:23:46.577 "num_shared_buffers": 511, 00:23:46.577 "buf_cache_size": 4294967295, 00:23:46.577 "dif_insert_or_strip": false, 00:23:46.577 "zcopy": false, 00:23:46.577 "c2h_success": false, 00:23:46.577 "sock_priority": 0, 00:23:46.577 "abort_timeout_sec": 1, 00:23:46.577 "ack_timeout": 0, 00:23:46.577 "data_wr_pool_size": 0 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.577 "method": "nvmf_create_subsystem", 00:23:46.577 "params": { 00:23:46.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.577 "allow_any_host": false, 00:23:46.577 "serial_number": "00000000000000000000", 00:23:46.577 "model_number": "SPDK bdev Controller", 00:23:46.577 "max_namespaces": 32, 00:23:46.577 "min_cntlid": 1, 00:23:46.577 "max_cntlid": 65519, 00:23:46.577 "ana_reporting": false 00:23:46.577 } 00:23:46.577 }, 00:23:46.577 { 00:23:46.578 "method": "nvmf_subsystem_add_host", 00:23:46.578 "params": { 00:23:46.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.578 "host": "nqn.2016-06.io.spdk:host1", 00:23:46.578 "psk": "key0" 00:23:46.578 } 00:23:46.578 }, 00:23:46.578 { 00:23:46.578 "method": "nvmf_subsystem_add_ns", 00:23:46.578 "params": { 00:23:46.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.578 "namespace": { 00:23:46.578 "nsid": 1, 00:23:46.578 "bdev_name": "malloc0", 00:23:46.578 "nguid": "1BDEFCCD90CB4C70BFCEB9F11ECB3314", 00:23:46.578 "uuid": "1bdefccd-90cb-4c70-bfce-b9f11ecb3314", 00:23:46.578 "no_auto_visible": false 00:23:46.578 } 00:23:46.578 } 00:23:46.578 }, 00:23:46.578 { 00:23:46.578 "method": "nvmf_subsystem_add_listener", 00:23:46.578 "params": { 00:23:46.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.578 "listen_address": { 00:23:46.578 "trtype": "TCP", 00:23:46.578 "adrfam": "IPv4", 00:23:46.578 "traddr": "10.0.0.2", 00:23:46.578 "trsvcid": "4420" 00:23:46.578 }, 00:23:46.578 "secure_channel": false, 00:23:46.578 "sock_impl": "ssl" 00:23:46.578 } 00:23:46.578 } 00:23:46.578 ] 00:23:46.578 } 00:23:46.578 ] 00:23:46.578 }' 00:23:46.578 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:46.837 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:46.837 "subsystems": [ 00:23:46.837 { 00:23:46.837 "subsystem": "keyring", 00:23:46.837 "config": [ 00:23:46.837 { 00:23:46.837 "method": "keyring_file_add_key", 00:23:46.837 "params": { 00:23:46.837 "name": "key0", 00:23:46.837 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:46.837 } 00:23:46.837 } 00:23:46.837 ] 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 "subsystem": "iobuf", 00:23:46.837 "config": [ 00:23:46.837 { 00:23:46.837 "method": "iobuf_set_options", 00:23:46.837 "params": { 00:23:46.837 "small_pool_count": 8192, 00:23:46.837 "large_pool_count": 1024, 00:23:46.837 "small_bufsize": 8192, 00:23:46.837 "large_bufsize": 135168, 00:23:46.837 "enable_numa": false 00:23:46.837 } 00:23:46.837 } 00:23:46.837 ] 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 "subsystem": "sock", 00:23:46.837 "config": [ 00:23:46.837 { 00:23:46.837 "method": "sock_set_default_impl", 00:23:46.837 "params": { 00:23:46.837 "impl_name": "posix" 00:23:46.837 } 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 
"method": "sock_impl_set_options", 00:23:46.837 "params": { 00:23:46.837 "impl_name": "ssl", 00:23:46.837 "recv_buf_size": 4096, 00:23:46.837 "send_buf_size": 4096, 00:23:46.837 "enable_recv_pipe": true, 00:23:46.837 "enable_quickack": false, 00:23:46.837 "enable_placement_id": 0, 00:23:46.837 "enable_zerocopy_send_server": true, 00:23:46.837 "enable_zerocopy_send_client": false, 00:23:46.837 "zerocopy_threshold": 0, 00:23:46.837 "tls_version": 0, 00:23:46.837 "enable_ktls": false 00:23:46.837 } 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 "method": "sock_impl_set_options", 00:23:46.837 "params": { 00:23:46.837 "impl_name": "posix", 00:23:46.837 "recv_buf_size": 2097152, 00:23:46.837 "send_buf_size": 2097152, 00:23:46.837 "enable_recv_pipe": true, 00:23:46.837 "enable_quickack": false, 00:23:46.837 "enable_placement_id": 0, 00:23:46.837 "enable_zerocopy_send_server": true, 00:23:46.837 "enable_zerocopy_send_client": false, 00:23:46.837 "zerocopy_threshold": 0, 00:23:46.837 "tls_version": 0, 00:23:46.837 "enable_ktls": false 00:23:46.837 } 00:23:46.837 } 00:23:46.837 ] 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 "subsystem": "vmd", 00:23:46.837 "config": [] 00:23:46.837 }, 00:23:46.837 { 00:23:46.837 "subsystem": "accel", 00:23:46.837 "config": [ 00:23:46.837 { 00:23:46.837 "method": "accel_set_options", 00:23:46.837 "params": { 00:23:46.837 "small_cache_size": 128, 00:23:46.838 "large_cache_size": 16, 00:23:46.838 "task_count": 2048, 00:23:46.838 "sequence_count": 2048, 00:23:46.838 "buf_count": 2048 00:23:46.838 } 00:23:46.838 } 00:23:46.838 ] 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "subsystem": "bdev", 00:23:46.838 "config": [ 00:23:46.838 { 00:23:46.838 "method": "bdev_set_options", 00:23:46.838 "params": { 00:23:46.838 "bdev_io_pool_size": 65535, 00:23:46.838 "bdev_io_cache_size": 256, 00:23:46.838 "bdev_auto_examine": true, 00:23:46.838 "iobuf_small_cache_size": 128, 00:23:46.838 "iobuf_large_cache_size": 16 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_raid_set_options", 00:23:46.838 "params": { 00:23:46.838 "process_window_size_kb": 1024, 00:23:46.838 "process_max_bandwidth_mb_sec": 0 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_iscsi_set_options", 00:23:46.838 "params": { 00:23:46.838 "timeout_sec": 30 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_nvme_set_options", 00:23:46.838 "params": { 00:23:46.838 "action_on_timeout": "none", 00:23:46.838 "timeout_us": 0, 00:23:46.838 "timeout_admin_us": 0, 00:23:46.838 "keep_alive_timeout_ms": 10000, 00:23:46.838 "arbitration_burst": 0, 00:23:46.838 "low_priority_weight": 0, 00:23:46.838 "medium_priority_weight": 0, 00:23:46.838 "high_priority_weight": 0, 00:23:46.838 "nvme_adminq_poll_period_us": 10000, 00:23:46.838 "nvme_ioq_poll_period_us": 0, 00:23:46.838 "io_queue_requests": 512, 00:23:46.838 "delay_cmd_submit": true, 00:23:46.838 "transport_retry_count": 4, 00:23:46.838 "bdev_retry_count": 3, 00:23:46.838 "transport_ack_timeout": 0, 00:23:46.838 "ctrlr_loss_timeout_sec": 0, 00:23:46.838 "reconnect_delay_sec": 0, 00:23:46.838 "fast_io_fail_timeout_sec": 0, 00:23:46.838 "disable_auto_failback": false, 00:23:46.838 "generate_uuids": false, 00:23:46.838 "transport_tos": 0, 00:23:46.838 "nvme_error_stat": false, 00:23:46.838 "rdma_srq_size": 0, 00:23:46.838 "io_path_stat": false, 00:23:46.838 "allow_accel_sequence": false, 00:23:46.838 "rdma_max_cq_size": 0, 00:23:46.838 "rdma_cm_event_timeout_ms": 0, 00:23:46.838 "dhchap_digests": [ 00:23:46.838 
"sha256", 00:23:46.838 "sha384", 00:23:46.838 "sha512" 00:23:46.838 ], 00:23:46.838 "dhchap_dhgroups": [ 00:23:46.838 "null", 00:23:46.838 "ffdhe2048", 00:23:46.838 "ffdhe3072", 00:23:46.838 "ffdhe4096", 00:23:46.838 "ffdhe6144", 00:23:46.838 "ffdhe8192" 00:23:46.838 ], 00:23:46.838 "rdma_umr_per_io": false 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_nvme_attach_controller", 00:23:46.838 "params": { 00:23:46.838 "name": "nvme0", 00:23:46.838 "trtype": "TCP", 00:23:46.838 "adrfam": "IPv4", 00:23:46.838 "traddr": "10.0.0.2", 00:23:46.838 "trsvcid": "4420", 00:23:46.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.838 "prchk_reftag": false, 00:23:46.838 "prchk_guard": false, 00:23:46.838 "ctrlr_loss_timeout_sec": 0, 00:23:46.838 "reconnect_delay_sec": 0, 00:23:46.838 "fast_io_fail_timeout_sec": 0, 00:23:46.838 "psk": "key0", 00:23:46.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.838 "hdgst": false, 00:23:46.838 "ddgst": false, 00:23:46.838 "multipath": "multipath" 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_nvme_set_hotplug", 00:23:46.838 "params": { 00:23:46.838 "period_us": 100000, 00:23:46.838 "enable": false 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_enable_histogram", 00:23:46.838 "params": { 00:23:46.838 "name": "nvme0n1", 00:23:46.838 "enable": true 00:23:46.838 } 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "method": "bdev_wait_for_examine" 00:23:46.838 } 00:23:46.838 ] 00:23:46.838 }, 00:23:46.838 { 00:23:46.838 "subsystem": "nbd", 00:23:46.838 "config": [] 00:23:46.838 } 00:23:46.838 ] 00:23:46.838 }' 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 357356 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357356 ']' 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357356 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357356 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357356' 00:23:46.838 killing process with pid 357356 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357356 00:23:46.838 Received shutdown signal, test time was about 1.000000 seconds 00:23:46.838 00:23:46.838 Latency(us) 00:23:46.838 [2024-12-13T11:29:14.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.838 [2024-12-13T11:29:14.538Z] =================================================================================================================== 00:23:46.838 [2024-12-13T11:29:14.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.838 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357356 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 357334 00:23:47.097 12:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357334 ']' 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357334 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357334 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357334' 00:23:47.097 killing process with pid 357334 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357334 00:23:47.097 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357334 00:23:47.357 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:47.357 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.357 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.357 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:47.357 "subsystems": [ 00:23:47.357 { 00:23:47.357 "subsystem": "keyring", 00:23:47.357 "config": [ 00:23:47.357 { 00:23:47.357 "method": "keyring_file_add_key", 00:23:47.357 "params": { 00:23:47.357 "name": "key0", 00:23:47.357 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:47.357 } 00:23:47.357 } 00:23:47.357 ] 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "subsystem": "iobuf", 00:23:47.357 "config": [ 00:23:47.357 { 00:23:47.357 "method": "iobuf_set_options", 00:23:47.357 "params": { 00:23:47.357 "small_pool_count": 8192, 00:23:47.357 "large_pool_count": 1024, 00:23:47.357 "small_bufsize": 8192, 00:23:47.357 "large_bufsize": 135168, 00:23:47.357 "enable_numa": false 00:23:47.357 } 00:23:47.357 } 00:23:47.357 ] 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "subsystem": "sock", 00:23:47.357 "config": [ 00:23:47.357 { 00:23:47.357 "method": "sock_set_default_impl", 00:23:47.357 "params": { 00:23:47.357 "impl_name": "posix" 00:23:47.357 } 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "method": "sock_impl_set_options", 00:23:47.357 "params": { 00:23:47.357 "impl_name": "ssl", 00:23:47.357 "recv_buf_size": 4096, 00:23:47.357 "send_buf_size": 4096, 00:23:47.357 "enable_recv_pipe": true, 00:23:47.357 "enable_quickack": false, 00:23:47.357 "enable_placement_id": 0, 00:23:47.357 "enable_zerocopy_send_server": true, 00:23:47.357 "enable_zerocopy_send_client": false, 00:23:47.357 "zerocopy_threshold": 0, 00:23:47.357 "tls_version": 0, 00:23:47.357 "enable_ktls": false 00:23:47.357 } 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "method": "sock_impl_set_options", 00:23:47.357 "params": { 00:23:47.357 "impl_name": "posix", 00:23:47.357 "recv_buf_size": 2097152, 00:23:47.357 "send_buf_size": 2097152, 00:23:47.357 "enable_recv_pipe": true, 00:23:47.357 "enable_quickack": false, 00:23:47.357 "enable_placement_id": 0, 00:23:47.357 "enable_zerocopy_send_server": true, 00:23:47.357 "enable_zerocopy_send_client": false, 00:23:47.357 
"zerocopy_threshold": 0, 00:23:47.357 "tls_version": 0, 00:23:47.357 "enable_ktls": false 00:23:47.357 } 00:23:47.357 } 00:23:47.357 ] 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "subsystem": "vmd", 00:23:47.357 "config": [] 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "subsystem": "accel", 00:23:47.357 "config": [ 00:23:47.357 { 00:23:47.357 "method": "accel_set_options", 00:23:47.357 "params": { 00:23:47.357 "small_cache_size": 128, 00:23:47.357 "large_cache_size": 16, 00:23:47.357 "task_count": 2048, 00:23:47.357 "sequence_count": 2048, 00:23:47.357 "buf_count": 2048 00:23:47.357 } 00:23:47.357 } 00:23:47.357 ] 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "subsystem": "bdev", 00:23:47.357 "config": [ 00:23:47.357 { 00:23:47.357 "method": "bdev_set_options", 00:23:47.357 "params": { 00:23:47.357 "bdev_io_pool_size": 65535, 00:23:47.357 "bdev_io_cache_size": 256, 00:23:47.357 "bdev_auto_examine": true, 00:23:47.357 "iobuf_small_cache_size": 128, 00:23:47.357 "iobuf_large_cache_size": 16 00:23:47.357 } 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "method": "bdev_raid_set_options", 00:23:47.357 "params": { 00:23:47.357 "process_window_size_kb": 1024, 00:23:47.357 "process_max_bandwidth_mb_sec": 0 00:23:47.357 } 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "method": "bdev_iscsi_set_options", 00:23:47.357 "params": { 00:23:47.357 "timeout_sec": 30 00:23:47.357 } 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "method": "bdev_nvme_set_options", 00:23:47.357 "params": { 00:23:47.357 "action_on_timeout": "none", 00:23:47.357 "timeout_us": 0, 00:23:47.357 "timeout_admin_us": 0, 00:23:47.357 "keep_alive_timeout_ms": 10000, 00:23:47.357 "arbitration_burst": 0, 00:23:47.357 "low_priority_weight": 0, 00:23:47.357 "medium_priority_weight": 0, 00:23:47.357 "high_priority_weight": 0, 00:23:47.357 "nvme_adminq_poll_period_us": 10000, 00:23:47.357 "nvme_ioq_poll_period_us": 0, 00:23:47.357 "io_queue_requests": 0, 00:23:47.357 "delay_cmd_submit": true, 00:23:47.357 "transport_retry_count": 4, 00:23:47.357 "bdev_retry_count": 3, 00:23:47.357 "transport_ack_timeout": 0, 00:23:47.358 "ctrlr_loss_timeout_sec": 0, 00:23:47.358 "reconnect_delay_sec": 0, 00:23:47.358 "fast_io_fail_timeout_sec": 0, 00:23:47.358 "disable_auto_failback": false, 00:23:47.358 "generate_uuids": false, 00:23:47.358 "transport_tos": 0, 00:23:47.358 "nvme_error_stat": false, 00:23:47.358 "rdma_srq_size": 0, 00:23:47.358 "io_path_stat": false, 00:23:47.358 "allow_accel_sequence": false, 00:23:47.358 "rdma_max_cq_size": 0, 00:23:47.358 "rdma_cm_event_timeout_ms": 0, 00:23:47.358 "dhchap_digests": [ 00:23:47.358 "sha256", 00:23:47.358 "sha384", 00:23:47.358 "sha512" 00:23:47.358 ], 00:23:47.358 "dhchap_dhgroups": [ 00:23:47.358 "null", 00:23:47.358 "ffdhe2048", 00:23:47.358 "ffdhe3072", 00:23:47.358 "ffdhe4096", 00:23:47.358 "ffdhe6144", 00:23:47.358 "ffdhe8192" 00:23:47.358 ], 00:23:47.358 "rdma_umr_per_io": false 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "bdev_nvme_set_hotplug", 00:23:47.358 "params": { 00:23:47.358 "period_us": 100000, 00:23:47.358 "enable": false 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "bdev_malloc_create", 00:23:47.358 "params": { 00:23:47.358 "name": "malloc0", 00:23:47.358 "num_blocks": 8192, 00:23:47.358 "block_size": 4096, 00:23:47.358 "physical_block_size": 4096, 00:23:47.358 "uuid": "1bdefccd-90cb-4c70-bfce-b9f11ecb3314", 00:23:47.358 "optimal_io_boundary": 0, 00:23:47.358 "md_size": 0, 00:23:47.358 "dif_type": 0, 00:23:47.358 "dif_is_head_of_md": false, 00:23:47.358 
"dif_pi_format": 0 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "bdev_wait_for_examine" 00:23:47.358 } 00:23:47.358 ] 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "subsystem": "nbd", 00:23:47.358 "config": [] 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "subsystem": "scheduler", 00:23:47.358 "config": [ 00:23:47.358 { 00:23:47.358 "method": "framework_set_scheduler", 00:23:47.358 "params": { 00:23:47.358 "name": "static" 00:23:47.358 } 00:23:47.358 } 00:23:47.358 ] 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "subsystem": "nvmf", 00:23:47.358 "config": [ 00:23:47.358 { 00:23:47.358 "method": "nvmf_set_config", 00:23:47.358 "params": { 00:23:47.358 "discovery_filter": "match_any", 00:23:47.358 "admin_cmd_passthru": { 00:23:47.358 "identify_ctrlr": false 00:23:47.358 }, 00:23:47.358 "dhchap_digests": [ 00:23:47.358 "sha256", 00:23:47.358 "sha384", 00:23:47.358 "sha512" 00:23:47.358 ], 00:23:47.358 "dhchap_dhgroups": [ 00:23:47.358 "null", 00:23:47.358 "ffdhe2048", 00:23:47.358 "ffdhe3072", 00:23:47.358 "ffdhe4096", 00:23:47.358 "ffdhe6144", 00:23:47.358 "ffdhe8192" 00:23:47.358 ] 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_set_max_subsystems", 00:23:47.358 "params": { 00:23:47.358 "max_subsystems": 1024 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_set_crdt", 00:23:47.358 "params": { 00:23:47.358 "crdt1": 0, 00:23:47.358 "crdt2": 0, 00:23:47.358 "crdt3": 0 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_create_transport", 00:23:47.358 "params": { 00:23:47.358 "trtype": "TCP", 00:23:47.358 "max_queue_depth": 128, 00:23:47.358 "max_io_qpairs_per_ctrlr": 127, 00:23:47.358 "in_capsule_data_size": 4096, 00:23:47.358 "max_io_size": 131072, 00:23:47.358 "io_unit_size": 131072, 00:23:47.358 "max_aq_depth": 128, 00:23:47.358 "num_shared_buffers": 511, 00:23:47.358 "buf_cache_size": 4294967295, 00:23:47.358 "dif_insert_or_strip": false, 00:23:47.358 "zcopy": false, 00:23:47.358 "c2h_success": false, 00:23:47.358 "sock_priority": 0, 00:23:47.358 "abort_timeout_sec": 1, 00:23:47.358 "ack_timeout": 0, 00:23:47.358 "data_wr_pool_size": 0 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_create_subsystem", 00:23:47.358 "params": { 00:23:47.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.358 "allow_any_host": false, 00:23:47.358 "serial_number": "00000000000000000000", 00:23:47.358 "model_number": "SPDK bdev Controller", 00:23:47.358 "max_namespaces": 32, 00:23:47.358 "min_cntlid": 1, 00:23:47.358 "max_cntlid": 65519, 00:23:47.358 "ana_reporting": false 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_subsystem_add_host", 00:23:47.358 "params": { 00:23:47.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.358 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.358 "psk": "key0" 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_subsystem_add_ns", 00:23:47.358 "params": { 00:23:47.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.358 "namespace": { 00:23:47.358 "nsid": 1, 00:23:47.358 "bdev_name": "malloc0", 00:23:47.358 "nguid": "1BDEFCCD90CB4C70BFCEB9F11ECB3314", 00:23:47.358 "uuid": "1bdefccd-90cb-4c70-bfce-b9f11ecb3314", 00:23:47.358 "no_auto_visible": false 00:23:47.358 } 00:23:47.358 } 00:23:47.358 }, 00:23:47.358 { 00:23:47.358 "method": "nvmf_subsystem_add_listener", 00:23:47.358 "params": { 00:23:47.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.358 "listen_address": { 00:23:47.358 "trtype": "TCP", 00:23:47.358 "adrfam": 
"IPv4", 00:23:47.358 "traddr": "10.0.0.2", 00:23:47.358 "trsvcid": "4420" 00:23:47.358 }, 00:23:47.358 "secure_channel": false, 00:23:47.358 "sock_impl": "ssl" 00:23:47.358 } 00:23:47.358 } 00:23:47.358 ] 00:23:47.358 } 00:23:47.358 ] 00:23:47.358 }' 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357820 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357820 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357820 ']' 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.358 [2024-12-13 12:29:14.860875] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:47.358 [2024-12-13 12:29:14.860919] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.358 [2024-12-13 12:29:14.933518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.358 [2024-12-13 12:29:14.954600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.358 [2024-12-13 12:29:14.954636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.358 [2024-12-13 12:29:14.954644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.358 [2024-12-13 12:29:14.954650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.358 [2024-12-13 12:29:14.954655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.358 [2024-12-13 12:29:14.955166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.617 [2024-12-13 12:29:15.162303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.617 [2024-12-13 12:29:15.194346] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.617 [2024-12-13 12:29:15.194536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=358053 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 358053 /var/tmp/bdevperf.sock 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358053 ']' 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:48.186 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:48.186 "subsystems": [ 00:23:48.186 { 00:23:48.186 "subsystem": "keyring", 00:23:48.186 "config": [ 00:23:48.186 { 00:23:48.186 "method": "keyring_file_add_key", 00:23:48.186 "params": { 00:23:48.186 "name": "key0", 00:23:48.186 "path": "/tmp/tmp.n1r6DmqXmb" 00:23:48.186 } 00:23:48.186 } 00:23:48.186 ] 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "subsystem": "iobuf", 00:23:48.186 "config": [ 00:23:48.186 { 00:23:48.186 "method": "iobuf_set_options", 00:23:48.186 "params": { 00:23:48.186 "small_pool_count": 8192, 00:23:48.186 "large_pool_count": 1024, 00:23:48.186 "small_bufsize": 8192, 00:23:48.186 "large_bufsize": 135168, 00:23:48.186 "enable_numa": false 00:23:48.186 } 00:23:48.186 } 00:23:48.186 ] 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "subsystem": "sock", 00:23:48.186 "config": [ 00:23:48.186 { 00:23:48.186 "method": "sock_set_default_impl", 00:23:48.186 "params": { 00:23:48.186 "impl_name": "posix" 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "sock_impl_set_options", 00:23:48.186 "params": { 00:23:48.186 "impl_name": "ssl", 00:23:48.186 "recv_buf_size": 4096, 00:23:48.186 "send_buf_size": 4096, 00:23:48.186 "enable_recv_pipe": true, 00:23:48.186 "enable_quickack": false, 00:23:48.186 "enable_placement_id": 0, 00:23:48.186 "enable_zerocopy_send_server": true, 00:23:48.186 "enable_zerocopy_send_client": false, 00:23:48.186 "zerocopy_threshold": 0, 00:23:48.186 "tls_version": 0, 00:23:48.186 "enable_ktls": false 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "sock_impl_set_options", 00:23:48.186 "params": { 00:23:48.186 "impl_name": "posix", 00:23:48.186 "recv_buf_size": 2097152, 00:23:48.186 "send_buf_size": 2097152, 00:23:48.186 "enable_recv_pipe": true, 00:23:48.186 "enable_quickack": false, 00:23:48.186 "enable_placement_id": 0, 00:23:48.186 "enable_zerocopy_send_server": true, 00:23:48.186 "enable_zerocopy_send_client": false, 00:23:48.186 "zerocopy_threshold": 0, 00:23:48.186 "tls_version": 0, 00:23:48.186 "enable_ktls": false 00:23:48.186 } 00:23:48.186 } 00:23:48.186 ] 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "subsystem": "vmd", 00:23:48.186 "config": [] 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "subsystem": "accel", 00:23:48.186 "config": [ 00:23:48.186 { 00:23:48.186 "method": "accel_set_options", 00:23:48.186 "params": { 00:23:48.186 "small_cache_size": 128, 00:23:48.186 "large_cache_size": 16, 00:23:48.186 "task_count": 2048, 00:23:48.186 "sequence_count": 2048, 00:23:48.186 "buf_count": 2048 00:23:48.186 } 00:23:48.186 } 00:23:48.186 ] 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "subsystem": "bdev", 00:23:48.186 "config": [ 00:23:48.186 { 00:23:48.186 "method": "bdev_set_options", 00:23:48.186 "params": { 00:23:48.186 "bdev_io_pool_size": 65535, 00:23:48.186 "bdev_io_cache_size": 256, 00:23:48.186 "bdev_auto_examine": true, 00:23:48.186 "iobuf_small_cache_size": 128, 00:23:48.186 "iobuf_large_cache_size": 16 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_raid_set_options", 00:23:48.186 "params": { 00:23:48.186 "process_window_size_kb": 1024, 00:23:48.186 "process_max_bandwidth_mb_sec": 0 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_iscsi_set_options", 00:23:48.186 "params": { 00:23:48.186 "timeout_sec": 30 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_nvme_set_options", 00:23:48.186 "params": { 00:23:48.186 "action_on_timeout": "none", 
00:23:48.186 "timeout_us": 0, 00:23:48.186 "timeout_admin_us": 0, 00:23:48.186 "keep_alive_timeout_ms": 10000, 00:23:48.186 "arbitration_burst": 0, 00:23:48.186 "low_priority_weight": 0, 00:23:48.186 "medium_priority_weight": 0, 00:23:48.186 "high_priority_weight": 0, 00:23:48.186 "nvme_adminq_poll_period_us": 10000, 00:23:48.186 "nvme_ioq_poll_period_us": 0, 00:23:48.186 "io_queue_requests": 512, 00:23:48.186 "delay_cmd_submit": true, 00:23:48.186 "transport_retry_count": 4, 00:23:48.186 "bdev_retry_count": 3, 00:23:48.186 "transport_ack_timeout": 0, 00:23:48.186 "ctrlr_loss_timeout_sec": 0, 00:23:48.186 "reconnect_delay_sec": 0, 00:23:48.186 "fast_io_fail_timeout_sec": 0, 00:23:48.186 "disable_auto_failback": false, 00:23:48.186 "generate_uuids": false, 00:23:48.186 "transport_tos": 0, 00:23:48.186 "nvme_error_stat": false, 00:23:48.186 "rdma_srq_size": 0, 00:23:48.186 "io_path_stat": false, 00:23:48.186 "allow_accel_sequence": false, 00:23:48.186 "rdma_max_cq_size": 0, 00:23:48.186 "rdma_cm_event_timeout_ms": 0, 00:23:48.186 "dhchap_digests": [ 00:23:48.186 "sha256", 00:23:48.186 "sha384", 00:23:48.186 "sha512" 00:23:48.186 ], 00:23:48.186 "dhchap_dhgroups": [ 00:23:48.186 "null", 00:23:48.186 "ffdhe2048", 00:23:48.186 "ffdhe3072", 00:23:48.186 "ffdhe4096", 00:23:48.186 "ffdhe6144", 00:23:48.186 "ffdhe8192" 00:23:48.186 ], 00:23:48.186 "rdma_umr_per_io": false 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_nvme_attach_controller", 00:23:48.186 "params": { 00:23:48.186 "name": "nvme0", 00:23:48.186 "trtype": "TCP", 00:23:48.186 "adrfam": "IPv4", 00:23:48.186 "traddr": "10.0.0.2", 00:23:48.186 "trsvcid": "4420", 00:23:48.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.186 "prchk_reftag": false, 00:23:48.186 "prchk_guard": false, 00:23:48.186 "ctrlr_loss_timeout_sec": 0, 00:23:48.186 "reconnect_delay_sec": 0, 00:23:48.186 "fast_io_fail_timeout_sec": 0, 00:23:48.186 "psk": "key0", 00:23:48.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.186 "hdgst": false, 00:23:48.186 "ddgst": false, 00:23:48.186 "multipath": "multipath" 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_nvme_set_hotplug", 00:23:48.186 "params": { 00:23:48.186 "period_us": 100000, 00:23:48.186 "enable": false 00:23:48.186 } 00:23:48.186 }, 00:23:48.186 { 00:23:48.186 "method": "bdev_enable_histogram", 00:23:48.186 "params": { 00:23:48.186 "name": "nvme0n1", 00:23:48.186 "enable": true 00:23:48.186 } 00:23:48.186 }, 00:23:48.187 { 00:23:48.187 "method": "bdev_wait_for_examine" 00:23:48.187 } 00:23:48.187 ] 00:23:48.187 }, 00:23:48.187 { 00:23:48.187 "subsystem": "nbd", 00:23:48.187 "config": [] 00:23:48.187 } 00:23:48.187 ] 00:23:48.187 }' 00:23:48.187 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.187 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.187 [2024-12-13 12:29:15.772804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:48.187 [2024-12-13 12:29:15.772851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358053 ] 00:23:48.187 [2024-12-13 12:29:15.843557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.187 [2024-12-13 12:29:15.865301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.446 [2024-12-13 12:29:16.013761] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.013 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.013 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.013 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:49.013 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:49.273 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.273 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.273 Running I/O for 1 seconds... 00:23:50.210 4792.00 IOPS, 18.72 MiB/s 00:23:50.210 Latency(us) 00:23:50.210 [2024-12-13T11:29:17.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.210 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:50.210 Verification LBA range: start 0x0 length 0x2000 00:23:50.210 nvme0n1 : 1.02 4846.64 18.93 0.00 0.00 26225.23 6303.94 30708.30 00:23:50.210 [2024-12-13T11:29:17.910Z] =================================================================================================================== 00:23:50.210 [2024-12-13T11:29:17.910Z] Total : 4846.64 18.93 0.00 0.00 26225.23 6303.94 30708.30 00:23:50.210 { 00:23:50.210 "results": [ 00:23:50.210 { 00:23:50.210 "job": "nvme0n1", 00:23:50.210 "core_mask": "0x2", 00:23:50.210 "workload": "verify", 00:23:50.210 "status": "finished", 00:23:50.210 "verify_range": { 00:23:50.210 "start": 0, 00:23:50.210 "length": 8192 00:23:50.210 }, 00:23:50.210 "queue_depth": 128, 00:23:50.210 "io_size": 4096, 00:23:50.210 "runtime": 1.015342, 00:23:50.210 "iops": 4846.642806069285, 00:23:50.210 "mibps": 18.932198461208145, 00:23:50.210 "io_failed": 0, 00:23:50.210 "io_timeout": 0, 00:23:50.210 "avg_latency_us": 26225.234833802653, 00:23:50.210 "min_latency_us": 6303.939047619047, 00:23:50.210 "max_latency_us": 30708.297142857144 00:23:50.210 } 00:23:50.210 ], 00:23:50.210 "core_count": 1 00:23:50.210 } 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:50.469 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:50.469 nvmf_trace.0 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 358053 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358053 ']' 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358053 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358053 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358053' 00:23:50.469 killing process with pid 358053 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358053 00:23:50.469 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.469 00:23:50.469 Latency(us) 00:23:50.469 [2024-12-13T11:29:18.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.469 [2024-12-13T11:29:18.169Z] =================================================================================================================== 00:23:50.469 [2024-12-13T11:29:18.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.469 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358053 00:23:50.728 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:50.728 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.728 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.729 rmmod nvme_tcp 00:23:50.729 rmmod nvme_fabrics 00:23:50.729 rmmod nvme_keyring 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.729 12:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 357820 ']' 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 357820 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357820 ']' 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357820 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357820 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357820' 00:23:50.729 killing process with pid 357820 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357820 00:23:50.729 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357820 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.988 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.894 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:52.894 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NUrtG8RbgP /tmp/tmp.gYnfbRz2u8 /tmp/tmp.n1r6DmqXmb 00:23:52.894 00:23:52.894 real 1m18.781s 00:23:52.894 user 2m1.143s 00:23:52.894 sys 0m29.830s 00:23:52.894 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.894 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.894 ************************************ 00:23:52.894 END TEST nvmf_tls 00:23:52.894 
************************************ 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.154 ************************************ 00:23:53.154 START TEST nvmf_fips 00:23:53.154 ************************************ 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:53.154 * Looking for test storage... 00:23:53.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.154 --rc genhtml_branch_coverage=1 00:23:53.154 --rc genhtml_function_coverage=1 00:23:53.154 --rc genhtml_legend=1 00:23:53.154 --rc geninfo_all_blocks=1 00:23:53.154 --rc geninfo_unexecuted_blocks=1 00:23:53.154 00:23:53.154 ' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.154 --rc genhtml_branch_coverage=1 00:23:53.154 --rc genhtml_function_coverage=1 00:23:53.154 --rc genhtml_legend=1 00:23:53.154 --rc geninfo_all_blocks=1 00:23:53.154 --rc geninfo_unexecuted_blocks=1 00:23:53.154 00:23:53.154 ' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.154 --rc genhtml_branch_coverage=1 00:23:53.154 --rc genhtml_function_coverage=1 00:23:53.154 --rc genhtml_legend=1 00:23:53.154 --rc geninfo_all_blocks=1 00:23:53.154 --rc geninfo_unexecuted_blocks=1 00:23:53.154 00:23:53.154 ' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.154 --rc genhtml_branch_coverage=1 00:23:53.154 --rc genhtml_function_coverage=1 00:23:53.154 --rc genhtml_legend=1 00:23:53.154 --rc geninfo_all_blocks=1 00:23:53.154 --rc geninfo_unexecuted_blocks=1 00:23:53.154 00:23:53.154 ' 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.154 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:53.155 12:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:53.155 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.414 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:53.415 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:53.415 Error setting digest 00:23:53.415 40C27F7C417F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:53.415 40C27F7C417F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.415 
12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.415 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.996 12:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.996 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.996 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.996 12:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.996 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.996 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.997 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.997 12:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:23:59.997 00:23:59.997 --- 10.0.0.2 ping statistics --- 00:23:59.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.997 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:59.997 00:23:59.997 --- 10.0.0.1 ping statistics --- 00:23:59.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.997 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=362002 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 362002 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362002 ']' 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.997 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.997 [2024-12-13 12:29:27.015651] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
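# The fips.sh flow that follows creates a TLS pre-shared key in the NVMe/TCP
# interchange format NVMeTLSkey-1:01:<base64 payload>: (01 selects SHA-256),
# stores it 0600 in a mktemp file, and hands it to setup_nvmf_tgt_conf. A
# hedged sketch of the kind of target-side RPCs that configuration performs
# (exact calls may differ; NQNs and key path as used in this log, serial
# number illustrative):
#
#   echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.e3Q
#   chmod 0600 /tmp/spdk-psk.e3Q
#   scripts/rpc.py nvmf_create_transport -t tcp
#   scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
#   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
#   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
#   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
#       -a 10.0.0.2 -s 4420 --secure-channel
#   scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
#       nqn.2016-06.io.spdk:host1 --psk /tmp/spdk-psk.e3Q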
00:23:59.997 [2024-12-13 12:29:27.015697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.997 [2024-12-13 12:29:27.093659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.997 [2024-12-13 12:29:27.113826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.997 [2024-12-13 12:29:27.113861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.997 [2024-12-13 12:29:27.113870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.997 [2024-12-13 12:29:27.113876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.997 [2024-12-13 12:29:27.113881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.997 [2024-12-13 12:29:27.114330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.e3Q 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.e3Q 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.e3Q 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.e3Q 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.997 [2024-12-13 12:29:27.424887] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.997 [2024-12-13 12:29:27.440899] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.997 [2024-12-13 12:29:27.441090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.997 malloc0 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.997 12:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=362033 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 362033 /var/tmp/bdevperf.sock 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362033 ']' 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.997 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.998 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.998 [2024-12-13 12:29:27.567696] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:59.998 [2024-12-13 12:29:27.567748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362033 ] 00:23:59.998 [2024-12-13 12:29:27.642010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.998 [2024-12-13 12:29:27.663920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.257 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.257 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:00.257 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.e3Q 00:24:00.257 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.516 [2024-12-13 12:29:28.135389] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.516 TLSTESTn1 00:24:00.775 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.775 Running I/O for 10 seconds... 
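# bdev_nvme_attach_controller -b TLSTEST above exposes namespace 1 of the
# TLS-secured controller as bdev TLSTESTn1, which bdevperf exercises for the
# next ten seconds; per-second throughput samples follow, then the aggregate
# table. A sketch of how the run can be inspected and triggered by hand:
#
#   scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b TLSTESTn1
#   examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests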
00:24:02.649 5262.00 IOPS, 20.55 MiB/s
[2024-12-13T11:29:31.726Z] 5373.00 IOPS, 20.99 MiB/s
[2024-12-13T11:29:32.664Z] 5432.33 IOPS, 21.22 MiB/s
[2024-12-13T11:29:33.602Z] 5306.00 IOPS, 20.73 MiB/s
[2024-12-13T11:29:34.539Z] 5307.00 IOPS, 20.73 MiB/s
[2024-12-13T11:29:35.476Z] 5311.17 IOPS, 20.75 MiB/s
[2024-12-13T11:29:36.412Z] 5264.14 IOPS, 20.56 MiB/s
[2024-12-13T11:29:37.348Z] 5245.38 IOPS, 20.49 MiB/s
[2024-12-13T11:29:38.727Z] 5213.22 IOPS, 20.36 MiB/s
[2024-12-13T11:29:38.727Z] 5190.50 IOPS, 20.28 MiB/s
00:24:11.027 Latency(us)
00:24:11.027 [2024-12-13T11:29:38.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.027 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.027 Verification LBA range: start 0x0 length 0x2000
00:24:11.027 TLSTESTn1 : 10.02 5194.21 20.29 0.00 0.00 24606.14 5929.45 51430.16
00:24:11.027 [2024-12-13T11:29:38.727Z] ===================================================================================================================
00:24:11.027 [2024-12-13T11:29:38.727Z] Total : 5194.21 20.29 0.00 0.00 24606.14 5929.45 51430.16
00:24:11.027 {
00:24:11.027 "results": [
00:24:11.027 {
00:24:11.027 "job": "TLSTESTn1",
00:24:11.027 "core_mask": "0x4",
00:24:11.027 "workload": "verify",
00:24:11.027 "status": "finished",
00:24:11.027 "verify_range": {
00:24:11.027 "start": 0,
00:24:11.027 "length": 8192
00:24:11.027 },
00:24:11.027 "queue_depth": 128,
00:24:11.027 "io_size": 4096,
00:24:11.027 "runtime": 10.017504,
00:24:11.027 "iops": 5194.208058214901,
00:24:11.027 "mibps": 20.289875227401957,
00:24:11.027 "io_failed": 0,
00:24:11.027 "io_timeout": 0,
00:24:11.027 "avg_latency_us": 24606.138236448845,
00:24:11.027 "min_latency_us": 5929.447619047619,
00:24:11.027 "max_latency_us": 51430.15619047619
00:24:11.027 }
00:24:11.027 ],
00:24:11.027 "core_count": 1
00:24:11.027 }
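The fixed-width table and the JSON block above carry the same numbers, but the JSON is the easier one to post-process if runs like this are being mined for trends. A small illustrative sketch, assuming jq is available on the host (not something this log confirms) and that the JSON has been captured to a file, here called results.json, rather than interleaved with the console output as it is in this run:

  # One line per job: name, IOPS, average latency in microseconds
  jq -r '.results[] | [.job, .iops, .avg_latency_us] | @tsv' results.json

The roughly 5194 IOPS at about 24.6 ms average latency reported here come from a single-core bdevperf (core mask 0x4) at queue depth 128 over the TLS-encrypted connection.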
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:11.027 nvmf_trace.0
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 362033
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362033 ']'
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362033
00:24:11.027 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362033
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362033'
00:24:11.028 killing process with pid 362033
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362033
00:24:11.028 Received shutdown signal, test time was about 10.000000 seconds
00:24:11.028
00:24:11.028 Latency(us)
00:24:11.028 [2024-12-13T11:29:38.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.028 [2024-12-13T11:29:38.728Z] ===================================================================================================================
00:24:11.028 [2024-12-13T11:29:38.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362033
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:11.028 rmmod nvme_tcp
00:24:11.028 rmmod nvme_fabrics
00:24:11.028 rmmod nvme_keyring
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 362002 ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 362002
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362002 ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362002
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:11.028 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362002
00:24:11.287 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:11.287 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362002'
killing process with pid 362002
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362002
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362002
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.e3Q
00:24:13.824
00:24:13.824 real 0m20.380s
00:24:13.824 user 0m21.310s
00:24:13.824 sys 0m9.486s
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:13.824 ************************************
00:24:13.824 END TEST nvmf_fips
00:24:13.824 ************************************
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:13.824 ************************************
00:24:13.824 START TEST nvmf_control_msg_list
00:24:13.824 ************************************
00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:13.824 * Looking for test storage...
00:24:13.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:13.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.824 --rc genhtml_branch_coverage=1 00:24:13.824 --rc genhtml_function_coverage=1 00:24:13.824 --rc genhtml_legend=1 00:24:13.824 --rc geninfo_all_blocks=1 00:24:13.824 --rc geninfo_unexecuted_blocks=1 00:24:13.824 00:24:13.824 ' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:13.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.824 --rc genhtml_branch_coverage=1 00:24:13.824 --rc genhtml_function_coverage=1 00:24:13.824 --rc genhtml_legend=1 00:24:13.824 --rc geninfo_all_blocks=1 00:24:13.824 --rc geninfo_unexecuted_blocks=1 00:24:13.824 00:24:13.824 ' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:13.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.824 --rc genhtml_branch_coverage=1 00:24:13.824 --rc genhtml_function_coverage=1 00:24:13.824 --rc genhtml_legend=1 00:24:13.824 --rc geninfo_all_blocks=1 00:24:13.824 --rc geninfo_unexecuted_blocks=1 00:24:13.824 00:24:13.824 ' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:13.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.824 --rc genhtml_branch_coverage=1 00:24:13.824 --rc genhtml_function_coverage=1 00:24:13.824 --rc genhtml_legend=1 00:24:13.824 --rc geninfo_all_blocks=1 00:24:13.824 --rc geninfo_unexecuted_blocks=1 00:24:13.824 00:24:13.824 ' 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.824 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.825 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:20.395 12:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.395 12:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.395 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.395 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.395 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.395 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.395 12:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.395 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:24:20.396 00:24:20.396 --- 10.0.0.2 ping statistics --- 00:24:20.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.396 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:24:20.396 00:24:20.396 --- 10.0.0.1 ping statistics --- 00:24:20.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.396 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=367277 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 367277 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 367277 ']' 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 [2024-12-13 12:29:47.224626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:20.396 [2024-12-13 12:29:47.224674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.396 [2024-12-13 12:29:47.302950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.396 [2024-12-13 12:29:47.324170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.396 [2024-12-13 12:29:47.324206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.396 [2024-12-13 12:29:47.324213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.396 [2024-12-13 12:29:47.324219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.396 [2024-12-13 12:29:47.324224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
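As with the FIPS test, the nvmf target here runs inside the cvl_0_0_ns_spdk network namespace (hence the ip netns exec prefix on the nvmf_tgt command above), which is why 10.0.0.1 and 10.0.0.2 answered the pings earlier in the log. The bring-up, reduced to the commands the trace shows, plus one illustrative way to wait for the RPC socket (the harness uses its waitforlisten helper instead; interface names are specific to this E810-based host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move one port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Start the target in the namespace and poll until its RPC socket answers
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done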
00:24:20.396 [2024-12-13 12:29:47.324731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 [2024-12-13 12:29:47.460050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 Malloc0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.396 12:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:20.396 [2024-12-13 12:29:47.508327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=367298
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=367299
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=367300
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 367298
00:24:20.396 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:20.396 [2024-12-13 12:29:47.603164] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[2024-12-13 12:29:47.603347] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
[2024-12-13 12:29:47.603507] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
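The point of this test is the transport configuration made above: in-capsule data is capped at 768 bytes and the transport gets exactly one control message buffer (--control-msg-num 1), so the three single-queue-depth perf initiators launched on cores 1-3 have to take turns on it. The target-side RPC sequence, condensed from the trace into a sketch (bdev_malloc_create 32 512 means a 32 MiB ramdisk with 512-byte blocks):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The three deprecation warnings that follow are expected here: each perf instance also probes the discovery subsystem on 10.0.0.2:4420, which was never explicitly added as a discovery listener.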
00:24:21.333 Initializing NVMe Controllers
00:24:21.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:21.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:21.333 Initialization complete. Launching workers.
00:24:21.333 ========================================================
00:24:21.333 Latency(us)
00:24:21.333 Device Information : IOPS MiB/s Average min max
00:24:21.333 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5290.00 20.66 188.67 121.90 607.31
00:24:21.333 ========================================================
00:24:21.333 Total : 5290.00 20.66 188.67 121.90 607.31
00:24:21.333
00:24:21.333 Initializing NVMe Controllers
00:24:21.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:21.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:21.333 Initialization complete. Launching workers.
00:24:21.333 ========================================================
00:24:21.333 Latency(us)
00:24:21.333 Device Information : IOPS MiB/s Average min max
00:24:21.333 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5229.97 20.43 190.85 121.76 379.30
00:24:21.333 ========================================================
00:24:21.333 Total : 5229.97 20.43 190.85 121.76 379.30
00:24:21.333
00:24:21.333 Initializing NVMe Controllers
00:24:21.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:21.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:21.333 Initialization complete. Launching workers.
00:24:21.333 ========================================================
00:24:21.333 Latency(us)
00:24:21.333 Device Information : IOPS MiB/s Average min max
00:24:21.333 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5195.99 20.30 192.10 121.72 388.06
00:24:21.333 ========================================================
00:24:21.333 Total : 5195.99 20.30 192.10 121.72 388.06
00:24:21.333
00:24:21.333 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 367299
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 367300
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:21.333 rmmod nvme_tcp
00:24:21.334 rmmod nvme_fabrics
00:24:21.334 rmmod nvme_keyring
00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n
367277 ']' 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 367277 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 367277 ']' 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 367277 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367277 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367277' 00:24:21.334 killing process with pid 367277 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 367277 00:24:21.334 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 367277 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.593 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:24.131 00:24:24.131 real 0m10.141s 00:24:24.131 user 0m6.819s 00:24:24.131 sys 0m5.466s 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:24.131 ************************************ 00:24:24.131 END TEST nvmf_control_msg_list 00:24:24.131 ************************************ 00:24:24.131 
12:29:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:24.131 ************************************ 00:24:24.131 START TEST nvmf_wait_for_buf 00:24:24.131 ************************************ 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.131 * Looking for test storage... 00:24:24.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.131 --rc genhtml_branch_coverage=1 00:24:24.131 --rc genhtml_function_coverage=1 00:24:24.131 --rc genhtml_legend=1 00:24:24.131 --rc geninfo_all_blocks=1 00:24:24.131 --rc geninfo_unexecuted_blocks=1 00:24:24.131 00:24:24.131 ' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.131 --rc genhtml_branch_coverage=1 00:24:24.131 --rc genhtml_function_coverage=1 00:24:24.131 --rc genhtml_legend=1 00:24:24.131 --rc geninfo_all_blocks=1 00:24:24.131 --rc geninfo_unexecuted_blocks=1 00:24:24.131 00:24:24.131 ' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.131 --rc genhtml_branch_coverage=1 00:24:24.131 --rc genhtml_function_coverage=1 00:24:24.131 --rc genhtml_legend=1 00:24:24.131 --rc geninfo_all_blocks=1 00:24:24.131 --rc geninfo_unexecuted_blocks=1 00:24:24.131 00:24:24.131 ' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.131 --rc genhtml_branch_coverage=1 00:24:24.131 --rc genhtml_function_coverage=1 00:24:24.131 --rc genhtml_legend=1 00:24:24.131 --rc geninfo_all_blocks=1 00:24:24.131 --rc geninfo_unexecuted_blocks=1 00:24:24.131 00:24:24.131 ' 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.131 12:29:51 
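
The lt 1.15 2 check traced above is scripts/common.sh comparing the installed lcov version field by field: both strings are split on '.', '-' and ':' and the fields are compared numerically, left to right, with missing fields treated as 0. A compact standalone sketch of the same idea (the ver_lt name is ours, not SPDK's, and non-numeric fields such as 'rc1' would need extra handling):

  # Return 0 (true) if version $1 sorts strictly before version $2.
  ver_lt() {
      local IFS=.-:
      local -a a b
      local i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo 'lcov predates 2.x'   # prints: lcov predates 2.x
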
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.131 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
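
The "[: : integer expression expected" message above is a real, if harmless, script error: an unset variable reaches test's numeric -eq operator as an empty string, which is exactly the traced '[' '' -eq 1 ']'. The usual defensive fix is to give the expansion a numeric default; a sketch with a placeholder name, since the log does not show which variable arrived empty at common.sh line 33:

  # SOME_FLAG stands in for whichever variable was unset in nvmf/common.sh.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo 'flag set'
  fi
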
'[' -z tcp ']' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.132 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.701 
12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.701 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:30.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:30.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:30.702 Found net devices under 0000:af:00.0: cvl_0_0 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:30.702 Found net devices under 0000:af:00.1: cvl_0_1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.702 12:29:57 
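
The device discovery traced above never touches vendor tools: the script keeps a whitelist of NIC PCI IDs (Intel E810/X722 plus a range of Mellanox parts) and resolves each matching PCI function to its kernel netdev purely through sysfs, which is where the "Found net devices under 0000:af:00.x: cvl_0_x" lines come from. A minimal sketch of that PCI-to-netdev lookup, using an address from the log:

  # Print the kernel net devices backed by one PCI function.
  pci=0000:af:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "${dev##*/}"   # e.g. cvl_0_0
  done
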
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:24:30.702 00:24:30.702 --- 10.0.0.2 ping statistics --- 00:24:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.702 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:30.702 00:24:30.702 --- 10.0.0.1 ping statistics --- 00:24:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.702 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=370992 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 370992 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 370992 ']' 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.702 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.702 [2024-12-13 12:29:57.465408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
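
The nvmf_tcp_init sequence traced above gives a single phy host a real two-ended fabric: the target-side port moves into its own network namespace, each side gets an address on 10.0.0.0/24, a tagged iptables rule opens the NVMe/TCP port, and one ping in each direction proves the path before the target starts. A condensed sketch of the same bring-up, with interface and namespace names copied from the trace:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                           # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  # Tag the rule so teardown can strip it with 'grep -v SPDK_NVMF':
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1                    # target -> initiator
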
00:24:30.702 [2024-12-13 12:29:57.465458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.703 [2024-12-13 12:29:57.544741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.703 [2024-12-13 12:29:57.566660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.703 [2024-12-13 12:29:57.566694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.703 [2024-12-13 12:29:57.566702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.703 [2024-12-13 12:29:57.566708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.703 [2024-12-13 12:29:57.566712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.703 [2024-12-13 12:29:57.567203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 
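
Because the target was launched with --wait-for-rpc, it parks before subsystem initialization, which is what lets this test shrink the small iobuf pool to 154 buffers before any allocation happens; framework_start_init is the point of no return. A sketch of the same pre-init RPC order using SPDK's stock scripts/rpc.py client (the trace goes through the rpc_cmd wrapper; socket path as logged):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init   # pools are sized and allocated here; too late to tune after
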
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 Malloc0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 [2024-12-13 12:29:57.748292] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:30.703 [2024-12-13 12:29:57.776557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.703 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.703 [2024-12-13 12:29:57.862858] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:32.082 Initializing NVMe Controllers
00:24:32.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:32.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:24:32.082 Initialization complete. Launching workers.
00:24:32.082 ========================================================
00:24:32.082 Latency(us)
00:24:32.082 Device Information : IOPS MiB/s Average min max
00:24:32.082 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33539.53 29914.80 71081.64
00:24:32.082 ========================================================
00:24:32.082 Total : 124.00 15.50 33539.53 29914.80 71081.64
00:24:32.082
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]]
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:32.082 rmmod nvme_tcp
00:24:32.082 rmmod nvme_fabrics
00:24:32.082 rmmod nvme_keyring
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 370992 ']'
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 370992
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 370992 ']'
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 370992
00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
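
With the framework up, the target is assembled entirely over RPC (one 32 MiB malloc bdev, a TCP transport deliberately starved to 24 shared buffers, a subsystem listening on 10.0.0.2:4420); spdk_nvme_perf then drives 128 KiB random reads at queue depth 4, and the test passes only because iobuf_get_stats reports a nonzero small-pool retry count (1958 here), proving I/O stalled on buffers and still completed. A sketch of the assembly and the final check, as rpc.py equivalents of the traced rpc_cmd calls:

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC bdev_malloc_create -b Malloc0 32 512                 # 32 MiB, 512 B blocks
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24  # only 24 shared buffers
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... run spdk_nvme_perf against the listener, then:
  retries=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [ "$retries" -eq 0 ] && echo 'small pool never ran dry: the wait path was not exercised'
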
common/autotest_common.sh@959 -- # uname 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370992 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370992' 00:24:32.082 killing process with pid 370992 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 370992 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 370992 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.082 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.083 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.083 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.619 00:24:34.619 real 0m10.476s 00:24:34.619 user 0m4.032s 00:24:34.619 sys 0m4.874s 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.619 ************************************ 00:24:34.619 END TEST nvmf_wait_for_buf 00:24:34.619 ************************************ 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.619 ************************************ 00:24:34.619 START TEST nvmf_fuzz 00:24:34.619 ************************************ 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:34.619 * Looking for test storage... 00:24:34.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:34.619 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.619 --rc genhtml_branch_coverage=1 00:24:34.619 --rc genhtml_function_coverage=1 00:24:34.619 --rc genhtml_legend=1 00:24:34.619 --rc geninfo_all_blocks=1 00:24:34.619 --rc geninfo_unexecuted_blocks=1 00:24:34.619 00:24:34.619 ' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.619 --rc genhtml_branch_coverage=1 00:24:34.619 --rc genhtml_function_coverage=1 00:24:34.619 --rc genhtml_legend=1 00:24:34.619 --rc geninfo_all_blocks=1 00:24:34.619 --rc geninfo_unexecuted_blocks=1 00:24:34.619 00:24:34.619 ' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.619 --rc genhtml_branch_coverage=1 00:24:34.619 --rc genhtml_function_coverage=1 00:24:34.619 --rc genhtml_legend=1 00:24:34.619 --rc geninfo_all_blocks=1 00:24:34.619 --rc geninfo_unexecuted_blocks=1 00:24:34.619 00:24:34.619 ' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:34.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.619 --rc genhtml_branch_coverage=1 00:24:34.619 --rc genhtml_function_coverage=1 00:24:34.619 --rc genhtml_legend=1 00:24:34.619 --rc geninfo_all_blocks=1 00:24:34.619 --rc geninfo_unexecuted_blocks=1 00:24:34.619 00:24:34.619 ' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.619 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:34.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.620 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.191 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:41.192 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:41.192 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:41.192 Found net devices under 0000:af:00.0: cvl_0_0 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:41.192 Found net devices under 0000:af:00.1: cvl_0_1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:24:41.192 00:24:41.192 --- 10.0.0.2 ping statistics --- 00:24:41.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.192 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:24:41.192 00:24:41.192 --- 10.0.0.1 ping statistics --- 00:24:41.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.192 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=375030 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 375030 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 375030 ']' 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
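(For reference, the nvmftestinit plumbing traced above condenses to the sequence below. This is a sketch assembled strictly from this log's own commands; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this host.)

  ip netns add cvl_0_0_ns_spdk                         # the target runs in its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one of the two e810 ports into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root namespace -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator reachability

(Both pings answering is what lets nvmf/common.sh@450 return 0 above, after which the harness loads nvme-tcp and launches nvmf_tgt inside the namespace.)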
00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.192 12:30:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.192 Malloc0 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.192 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:41.193 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:13.274 Fuzzing completed. 
Shutting down the fuzz application 00:25:13.274 00:25:13.274 Dumping successful admin opcodes: 00:25:13.274 9, 10, 00:25:13.274 Dumping successful io opcodes: 00:25:13.274 0, 9, 00:25:13.274 NS: 0x2000008eff00 I/O qp, Total commands completed: 908567, total successful commands: 5292, random_seed: 1647276544 00:25:13.274 NS: 0x2000008eff00 admin qp, Total commands completed: 96672, total successful commands: 22, random_seed: 3232382592 00:25:13.274 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:13.274 Fuzzing completed. Shutting down the fuzz application 00:25:13.274 00:25:13.274 Dumping successful admin opcodes: 00:25:13.274 00:25:13.274 Dumping successful io opcodes: 00:25:13.274 00:25:13.274 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2635908440 00:25:13.274 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 2635973132 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.274 rmmod nvme_tcp 00:25:13.274 rmmod nvme_fabrics 00:25:13.274 rmmod nvme_keyring 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 375030 ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 375030 ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375030' 00:25:13.274 killing process with pid 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 375030 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.274 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:14.654 00:25:14.654 real 0m40.243s 00:25:14.654 user 0m52.251s 00:25:14.654 sys 0m17.093s 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.654 ************************************ 00:25:14.654 END TEST nvmf_fuzz 00:25:14.654 ************************************ 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:14.654 ************************************ 00:25:14.654 START TEST 
nvmf_multiconnection 00:25:14.654 ************************************ 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:14.654 * Looking for test storage... 00:25:14.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.654 --rc genhtml_branch_coverage=1 00:25:14.654 --rc genhtml_function_coverage=1 00:25:14.654 --rc genhtml_legend=1 00:25:14.654 --rc geninfo_all_blocks=1 00:25:14.654 --rc geninfo_unexecuted_blocks=1 00:25:14.654 00:25:14.654 ' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.654 --rc genhtml_branch_coverage=1 00:25:14.654 --rc genhtml_function_coverage=1 00:25:14.654 --rc genhtml_legend=1 00:25:14.654 --rc geninfo_all_blocks=1 00:25:14.654 --rc geninfo_unexecuted_blocks=1 00:25:14.654 00:25:14.654 ' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.654 --rc genhtml_branch_coverage=1 00:25:14.654 --rc genhtml_function_coverage=1 00:25:14.654 --rc genhtml_legend=1 00:25:14.654 --rc geninfo_all_blocks=1 00:25:14.654 --rc geninfo_unexecuted_blocks=1 00:25:14.654 00:25:14.654 ' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.654 --rc genhtml_branch_coverage=1 00:25:14.654 --rc genhtml_function_coverage=1 00:25:14.654 --rc genhtml_legend=1 00:25:14.654 --rc geninfo_all_blocks=1 00:25:14.654 --rc geninfo_unexecuted_blocks=1 00:25:14.654 00:25:14.654 ' 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:14.654 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.655 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.655 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.655 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.655 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.655 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.914 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.487 12:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.487 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:21.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:21.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:21.488 Found net devices under 0000:af:00.0: cvl_0_0 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:21.488 Found net devices under 0000:af:00.1: cvl_0_1 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.488 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:25:21.488 00:25:21.488 --- 10.0.0.2 ping statistics --- 00:25:21.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.488 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:25:21.488 00:25:21.488 --- 10.0.0.1 ping statistics --- 00:25:21.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.488 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=383863 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 383863 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.488 12:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 383863 ']' 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.488 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 [2024-12-13 12:30:48.303414] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:21.489 [2024-12-13 12:30:48.303458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.489 [2024-12-13 12:30:48.384176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.489 [2024-12-13 12:30:48.408519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.489 [2024-12-13 12:30:48.408555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.489 [2024-12-13 12:30:48.408563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.489 [2024-12-13 12:30:48.408569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.489 [2024-12-13 12:30:48.408574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
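(The EAL banner and app_setup_trace notices above also say how this target instance can be inspected while it runs. Condensed into a sketch, with the launch command copied from nvmf/common.sh@508 above; the trailing & reflects that the harness backgrounds the target and then polls it with waitforlisten 383863:)

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all tracepoint groups, core mask 0xF
  spdk_trace -s nvmf -i 0        # snapshot of events at runtime, per the notice above
  # ...or copy /dev/shm/nvmf_trace.0 for offline analysis/debug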
00:25:21.489 [2024-12-13 12:30:48.410079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.489 [2024-12-13 12:30:48.410189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.489 [2024-12-13 12:30:48.410297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.489 [2024-12-13 12:30:48.410299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 [2024-12-13 12:30:48.554141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 Malloc1 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
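(The loop that has just started above, target/multiconnection.sh@21, repeats one block of RPCs for each subsystem; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, talking to the /var/tmp/spdk.sock socket named in the waitforlisten message above. Reconstructed from this trace as a sketch:)

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as traced above
  for i in $(seq 1 $NVMF_SUBSYS); do                   # NVMF_SUBSYS=11, set at multiconnection.sh@14
      rpc_cmd bdev_malloc_create 64 512 -b Malloc$i    # 64 MB malloc bdev, 512-byte blocks
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # -a: allow any host
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as a namespace
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

(All eleven listeners share the single 10.0.0.2:4420 endpoint, so initiators select a subsystem purely by NQN.)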
00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 [2024-12-13 12:30:48.618263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 Malloc2 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 Malloc3 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 Malloc4 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.489 Malloc5 00:25:21.489 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc6 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc7 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
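
[Editor's note, not captured output] Once all eleven subsystems exist, the trace below switches to the host side: one nvme connect per subsystem, each followed by the waitforserial helper, which sleeps and re-checks lsblk until a block device carrying that subsystem's serial appears. A minimal sketch of that step, assuming nvme-cli is installed; the 15-retry poll mirrors the helper's counters seen in the trace but is illustrative, not its exact implementation:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  for i in $(seq 1 11); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode$i --hostnqn=$hostnqn
      # waitforserial SPDK$i: poll until the namespace surfaces as a block device.
      tries=0
      until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
          tries=$((tries + 1)); [ "$tries" -gt 15 ] && exit 1
          sleep 2
      done
  done
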
00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc8 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc9 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:21.490 12:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc10 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.490 Malloc11 00:25:21.490 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.491 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:22.870 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:22.870 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:22.870 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.870 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:22.870 12:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.775 12:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:26.153 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:26.153 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.153 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.153 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.153 12:30:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.057 12:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:29.433 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:29.433 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.433 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:29.433 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.433 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.339 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:32.718 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:32.718 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.718 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.718 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.718 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.622 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.622 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.622 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:34.622 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.622 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.622 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.622 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.622 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:36.000 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:36.000 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:36.000 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.000 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:36.000 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.905 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:39.283 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:39.283 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.283 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.283 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.283 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.190 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:42.127 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:42.127 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:42.127 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.127 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:42.127 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.661 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:45.599 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:45.599 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.599 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.599 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.599 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.507 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:49.411 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:49.411 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:49.411 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.411 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:49.411 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.316 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:52.692 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:52.692 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:52.692 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.692 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:52.692 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.597 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.597 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.597 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:54.597 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.597 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.597 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.597 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.597 12:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:55.975 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:55.975 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:55.975 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.975 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:55.975 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:57.880 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:57.880 [global] 00:25:57.880 thread=1 00:25:57.880 invalidate=1 00:25:57.880 rw=read 00:25:57.880 time_based=1 00:25:57.880 runtime=10 00:25:57.880 ioengine=libaio 00:25:57.880 direct=1 00:25:57.880 bs=262144 00:25:57.880 iodepth=64 00:25:57.880 norandommap=1 00:25:57.880 numjobs=1 00:25:57.880 00:25:57.880 [job0] 00:25:57.880 filename=/dev/nvme0n1 00:25:58.138 [job1] 00:25:58.138 filename=/dev/nvme10n1 00:25:58.138 [job2] 00:25:58.138 filename=/dev/nvme1n1 00:25:58.138 [job3] 00:25:58.138 filename=/dev/nvme2n1 00:25:58.138 [job4] 00:25:58.138 filename=/dev/nvme3n1 00:25:58.138 [job5] 00:25:58.139 filename=/dev/nvme4n1 00:25:58.139 [job6] 00:25:58.139 filename=/dev/nvme5n1 00:25:58.139 [job7] 00:25:58.139 filename=/dev/nvme6n1 00:25:58.139 [job8] 00:25:58.139 filename=/dev/nvme7n1 00:25:58.139 [job9] 00:25:58.139 filename=/dev/nvme8n1 00:25:58.139 [job10] 00:25:58.139 filename=/dev/nvme9n1 00:25:58.139 Could not set queue depth (nvme0n1) 00:25:58.139 Could not set queue depth (nvme10n1) 00:25:58.139 Could not set queue depth (nvme1n1) 00:25:58.139 Could not set queue depth (nvme2n1) 00:25:58.139 Could not set queue depth (nvme3n1) 00:25:58.139 Could not set queue depth (nvme4n1) 00:25:58.139 Could not set queue depth (nvme5n1) 00:25:58.139 Could not set queue depth (nvme6n1) 00:25:58.139 Could not set queue depth (nvme7n1) 00:25:58.139 Could not set queue depth (nvme8n1) 00:25:58.139 Could not set queue depth (nvme9n1) 00:25:58.398 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.398 fio-3.35 00:25:58.398 Starting 11 threads 00:26:10.613 00:26:10.613 job0: (groupid=0, jobs=1): err= 0: pid=390278: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=327, BW=82.0MiB/s (86.0MB/s)(831MiB/10140msec) 00:26:10.613 slat (usec): min=14, max=333372, avg=1760.57, stdev=11758.92 00:26:10.613 clat (usec): min=566, max=895381, avg=193182.20, stdev=193809.32 00:26:10.613 lat (usec): min=600, max=895414, avg=194942.78, stdev=195529.98 00:26:10.613 clat percentiles (usec): 00:26:10.613 | 1.00th=[ 701], 5.00th=[ 3589], 10.00th=[ 8356], 20.00th=[ 15139], 00:26:10.613 | 30.00th=[ 24249], 40.00th=[ 49021], 50.00th=[141558], 60.00th=[229639], 00:26:10.613 | 70.00th=[287310], 80.00th=[350225], 90.00th=[488637], 95.00th=[557843], 00:26:10.613 | 99.00th=[750781], 99.50th=[784335], 99.90th=[884999], 99.95th=[893387], 00:26:10.613 | 99.99th=[893387] 00:26:10.613 bw ( KiB/s): min=14336, max=239616, per=10.63%, avg=83448.90, stdev=61007.43, samples=20 00:26:10.613 iops : min= 56, max= 936, avg=325.90, stdev=238.30, samples=20 00:26:10.613 lat (usec) : 750=1.53%, 1000=1.11% 00:26:10.613 lat (msec) : 2=1.86%, 4=0.60%, 10=6.89%, 20=15.52%, 50=12.81% 00:26:10.613 lat (msec) : 100=4.90%, 250=18.74%, 500=27.43%, 750=7.85%, 1000=0.75% 00:26:10.613 cpu : usr=0.08%, sys=1.15%, ctx=1253, majf=0, minf=4097 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.613 issued rwts: total=3325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.613 job1: (groupid=0, jobs=1): err= 0: pid=390279: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=308, BW=77.2MiB/s (80.9MB/s)(783MiB/10143msec) 00:26:10.613 slat (usec): min=8, max=327033, avg=2691.61, stdev=13082.29 00:26:10.613 clat (usec): min=1676, max=744391, avg=204393.97, stdev=170655.21 00:26:10.613 lat (usec): min=1701, max=744422, avg=207085.58, stdev=173277.87 00:26:10.613 clat percentiles (msec): 00:26:10.613 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 38], 00:26:10.613 | 30.00th=[ 86], 40.00th=[ 129], 50.00th=[ 165], 60.00th=[ 199], 00:26:10.613 | 70.00th=[ 249], 80.00th=[ 380], 90.00th=[ 464], 95.00th=[ 531], 
00:26:10.613 | 99.00th=[ 667], 99.50th=[ 701], 99.90th=[ 735], 99.95th=[ 743], 00:26:10.613 | 99.99th=[ 743] 00:26:10.613 bw ( KiB/s): min=22016, max=232448, per=10.00%, avg=78491.40, stdev=56830.53, samples=20 00:26:10.613 iops : min= 86, max= 908, avg=306.50, stdev=221.98, samples=20 00:26:10.613 lat (msec) : 2=0.06%, 4=0.45%, 10=2.97%, 20=4.95%, 50=17.53% 00:26:10.613 lat (msec) : 100=6.77%, 250=37.46%, 500=22.29%, 750=7.51% 00:26:10.613 cpu : usr=0.15%, sys=1.06%, ctx=672, majf=0, minf=4097 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.613 issued rwts: total=3131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.613 job2: (groupid=0, jobs=1): err= 0: pid=390280: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=239, BW=60.0MiB/s (62.9MB/s)(607MiB/10114msec) 00:26:10.613 slat (usec): min=8, max=226727, avg=3466.51, stdev=16907.85 00:26:10.613 clat (usec): min=845, max=1032.9k, avg=262939.37, stdev=212889.14 00:26:10.613 lat (usec): min=1299, max=1032.9k, avg=266405.89, stdev=215847.19 00:26:10.613 clat percentiles (msec): 00:26:10.613 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 53], 00:26:10.613 | 30.00th=[ 74], 40.00th=[ 146], 50.00th=[ 243], 60.00th=[ 321], 00:26:10.613 | 70.00th=[ 388], 80.00th=[ 447], 90.00th=[ 550], 95.00th=[ 634], 00:26:10.613 | 99.00th=[ 869], 99.50th=[ 936], 99.90th=[ 1036], 99.95th=[ 1036], 00:26:10.613 | 99.99th=[ 1036] 00:26:10.613 bw ( KiB/s): min=11264, max=136704, per=7.71%, avg=60500.65, stdev=36473.83, samples=20 00:26:10.613 iops : min= 44, max= 534, avg=236.25, stdev=142.52, samples=20 00:26:10.613 lat (usec) : 1000=0.04% 00:26:10.613 lat (msec) : 2=0.25%, 4=0.49%, 10=2.80%, 20=3.46%, 50=11.45% 00:26:10.613 lat (msec) : 100=16.81%, 250=15.95%, 500=35.23%, 750=10.80%, 1000=2.51% 00:26:10.613 lat (msec) : 2000=0.21% 00:26:10.613 cpu : usr=0.08%, sys=0.88%, ctx=626, majf=0, minf=3722 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.613 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.613 job3: (groupid=0, jobs=1): err= 0: pid=390281: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=281, BW=70.5MiB/s (73.9MB/s)(713MiB/10114msec) 00:26:10.613 slat (usec): min=14, max=381362, avg=3005.92, stdev=17787.03 00:26:10.613 clat (usec): min=1511, max=816872, avg=223721.80, stdev=186512.31 00:26:10.613 lat (msec): min=2, max=869, avg=226.73, stdev=188.55 00:26:10.613 clat percentiles (msec): 00:26:10.613 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 49], 00:26:10.613 | 30.00th=[ 85], 40.00th=[ 136], 50.00th=[ 182], 60.00th=[ 243], 00:26:10.613 | 70.00th=[ 317], 80.00th=[ 372], 90.00th=[ 447], 95.00th=[ 617], 00:26:10.613 | 99.00th=[ 802], 99.50th=[ 810], 99.90th=[ 818], 99.95th=[ 818], 00:26:10.613 | 99.99th=[ 818] 00:26:10.613 bw ( KiB/s): min=10730, max=186880, per=9.09%, avg=71352.15, stdev=52462.21, samples=20 00:26:10.613 iops : min= 41, max= 730, avg=278.60, stdev=204.94, samples=20 00:26:10.613 lat (msec) : 2=0.04%, 4=0.14%, 10=5.26%, 20=4.56%, 50=10.48% 
00:26:10.613 lat (msec) : 100=14.48%, 250=25.91%, 500=31.31%, 750=5.93%, 1000=1.89% 00:26:10.613 cpu : usr=0.06%, sys=1.10%, ctx=725, majf=0, minf=4097 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.613 issued rwts: total=2852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.613 job4: (groupid=0, jobs=1): err= 0: pid=390283: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=301, BW=75.3MiB/s (79.0MB/s)(764MiB/10134msec) 00:26:10.613 slat (usec): min=6, max=482346, avg=2248.91, stdev=14182.63 00:26:10.613 clat (usec): min=649, max=836957, avg=209876.35, stdev=178911.10 00:26:10.613 lat (usec): min=707, max=836984, avg=212125.27, stdev=180833.18 00:26:10.613 clat percentiles (msec): 00:26:10.613 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 22], 20.00th=[ 42], 00:26:10.613 | 30.00th=[ 65], 40.00th=[ 82], 50.00th=[ 165], 60.00th=[ 264], 00:26:10.613 | 70.00th=[ 326], 80.00th=[ 372], 90.00th=[ 439], 95.00th=[ 502], 00:26:10.613 | 99.00th=[ 743], 99.50th=[ 793], 99.90th=[ 835], 99.95th=[ 835], 00:26:10.613 | 99.99th=[ 835] 00:26:10.613 bw ( KiB/s): min=28160, max=223232, per=9.75%, avg=76554.55, stdev=60138.80, samples=20 00:26:10.613 iops : min= 110, max= 872, avg=298.90, stdev=234.91, samples=20 00:26:10.613 lat (usec) : 750=0.13% 00:26:10.613 lat (msec) : 2=0.13%, 4=4.78%, 10=2.98%, 20=1.24%, 50=14.57% 00:26:10.613 lat (msec) : 100=18.66%, 250=15.72%, 500=36.97%, 750=3.99%, 1000=0.82% 00:26:10.613 cpu : usr=0.08%, sys=0.91%, ctx=573, majf=0, minf=4097 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.613 issued rwts: total=3054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.613 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.613 job5: (groupid=0, jobs=1): err= 0: pid=390289: Fri Dec 13 12:31:36 2024 00:26:10.613 read: IOPS=189, BW=47.3MiB/s (49.6MB/s)(479MiB/10137msec) 00:26:10.613 slat (usec): min=8, max=228533, avg=3593.68, stdev=15813.95 00:26:10.613 clat (msec): min=13, max=948, avg=334.44, stdev=174.48 00:26:10.613 lat (msec): min=13, max=948, avg=338.03, stdev=176.96 00:26:10.613 clat percentiles (msec): 00:26:10.613 | 1.00th=[ 29], 5.00th=[ 62], 10.00th=[ 94], 20.00th=[ 171], 00:26:10.613 | 30.00th=[ 230], 40.00th=[ 279], 50.00th=[ 334], 60.00th=[ 384], 00:26:10.613 | 70.00th=[ 435], 80.00th=[ 489], 90.00th=[ 550], 95.00th=[ 609], 00:26:10.613 | 99.00th=[ 785], 99.50th=[ 810], 99.90th=[ 953], 99.95th=[ 953], 00:26:10.613 | 99.99th=[ 953] 00:26:10.613 bw ( KiB/s): min=11776, max=97792, per=6.04%, avg=47448.25, stdev=21534.08, samples=20 00:26:10.613 iops : min= 46, max= 382, avg=185.25, stdev=84.13, samples=20 00:26:10.613 lat (msec) : 20=0.37%, 50=3.18%, 100=7.04%, 250=23.94%, 500=47.57% 00:26:10.613 lat (msec) : 750=15.70%, 1000=2.19% 00:26:10.613 cpu : usr=0.09%, sys=0.66%, ctx=447, majf=0, minf=4097 00:26:10.613 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:10.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.613 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: 
total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 job6: (groupid=0, jobs=1): err= 0: pid=390292: Fri Dec 13 12:31:36 2024 00:26:10.614 read: IOPS=348, BW=87.1MiB/s (91.3MB/s)(883MiB/10135msec) 00:26:10.614 slat (usec): min=13, max=169046, avg=1609.91, stdev=8800.26 00:26:10.614 clat (usec): min=1268, max=702032, avg=181874.72, stdev=167642.43 00:26:10.614 lat (usec): min=1654, max=702059, avg=183484.63, stdev=169034.31 00:26:10.614 clat percentiles (msec): 00:26:10.614 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 46], 00:26:10.614 | 30.00th=[ 65], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 165], 00:26:10.614 | 70.00th=[ 268], 80.00th=[ 347], 90.00th=[ 456], 95.00th=[ 514], 00:26:10.614 | 99.00th=[ 625], 99.50th=[ 651], 99.90th=[ 676], 99.95th=[ 701], 00:26:10.614 | 99.99th=[ 701] 00:26:10.614 bw ( KiB/s): min=23552, max=264192, per=11.30%, avg=88736.65, stdev=72825.29, samples=20 00:26:10.614 iops : min= 92, max= 1032, avg=346.55, stdev=284.52, samples=20 00:26:10.614 lat (msec) : 2=0.28%, 4=0.42%, 10=6.91%, 20=7.70%, 50=6.66% 00:26:10.614 lat (msec) : 100=29.71%, 250=15.49%, 500=26.48%, 750=6.34% 00:26:10.614 cpu : usr=0.11%, sys=1.17%, ctx=966, majf=0, minf=4097 00:26:10.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:10.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: total=3531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 job7: (groupid=0, jobs=1): err= 0: pid=390293: Fri Dec 13 12:31:36 2024 00:26:10.614 read: IOPS=269, BW=67.4MiB/s (70.7MB/s)(679MiB/10067msec) 00:26:10.614 slat (usec): min=7, max=310794, avg=2103.18, stdev=12754.32 00:26:10.614 clat (usec): min=615, max=1027.2k, avg=235027.50, stdev=169283.83 00:26:10.614 lat (usec): min=656, max=1027.2k, avg=237130.68, stdev=171125.37 00:26:10.614 clat percentiles (usec): 00:26:10.614 | 1.00th=[ 1631], 5.00th=[ 5997], 10.00th=[ 12649], 00:26:10.614 | 20.00th=[ 73925], 30.00th=[ 149947], 40.00th=[ 183501], 00:26:10.614 | 50.00th=[ 214959], 60.00th=[ 256902], 70.00th=[ 291505], 00:26:10.614 | 80.00th=[ 354419], 90.00th=[ 450888], 95.00th=[ 513803], 00:26:10.614 | 99.00th=[ 759170], 99.50th=[ 893387], 99.90th=[1027605], 00:26:10.614 | 99.95th=[1027605], 99.99th=[1027605] 00:26:10.614 bw ( KiB/s): min=16384, max=185996, per=8.64%, avg=67835.65, stdev=46215.08, samples=20 00:26:10.614 iops : min= 64, max= 726, avg=264.90, stdev=180.48, samples=20 00:26:10.614 lat (usec) : 750=0.37% 00:26:10.614 lat (msec) : 2=0.81%, 4=1.25%, 10=6.37%, 20=3.50%, 50=4.53% 00:26:10.614 lat (msec) : 100=4.83%, 250=35.92%, 500=36.22%, 750=5.16%, 1000=0.85% 00:26:10.614 lat (msec) : 2000=0.18% 00:26:10.614 cpu : usr=0.10%, sys=0.83%, ctx=717, majf=0, minf=4097 00:26:10.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:10.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 job8: (groupid=0, jobs=1): err= 0: pid=390294: Fri Dec 13 12:31:36 2024 00:26:10.614 read: IOPS=246, BW=61.7MiB/s (64.7MB/s)(624MiB/10111msec) 00:26:10.614 slat 
(usec): min=8, max=384477, avg=3401.02, stdev=17642.47 00:26:10.614 clat (usec): min=627, max=1108.3k, avg=255815.01, stdev=204340.83 00:26:10.614 lat (usec): min=652, max=1108.3k, avg=259216.03, stdev=206375.48 00:26:10.614 clat percentiles (usec): 00:26:10.614 | 1.00th=[ 824], 5.00th=[ 15270], 10.00th=[ 46924], 00:26:10.614 | 20.00th=[ 67634], 30.00th=[ 98042], 40.00th=[ 143655], 00:26:10.614 | 50.00th=[ 242222], 60.00th=[ 287310], 70.00th=[ 346031], 00:26:10.614 | 80.00th=[ 400557], 90.00th=[ 509608], 95.00th=[ 591397], 00:26:10.614 | 99.00th=[ 968885], 99.50th=[1098908], 99.90th=[1115685], 00:26:10.614 | 99.95th=[1115685], 99.99th=[1115685] 00:26:10.614 bw ( KiB/s): min=13312, max=179712, per=7.93%, avg=62226.50, stdev=45098.90, samples=20 00:26:10.614 iops : min= 52, max= 702, avg=242.95, stdev=176.28, samples=20 00:26:10.614 lat (usec) : 750=0.68%, 1000=1.80% 00:26:10.614 lat (msec) : 2=0.44%, 4=0.08%, 10=0.48%, 20=2.53%, 50=5.25% 00:26:10.614 lat (msec) : 100=18.81%, 250=21.25%, 500=37.65%, 750=8.06%, 1000=2.09% 00:26:10.614 lat (msec) : 2000=0.88% 00:26:10.614 cpu : usr=0.08%, sys=0.94%, ctx=478, majf=0, minf=4098 00:26:10.614 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:10.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 job9: (groupid=0, jobs=1): err= 0: pid=390295: Fri Dec 13 12:31:36 2024 00:26:10.614 read: IOPS=289, BW=72.4MiB/s (75.9MB/s)(729MiB/10068msec) 00:26:10.614 slat (usec): min=11, max=144491, avg=2851.15, stdev=11206.71 00:26:10.614 clat (usec): min=1320, max=877394, avg=217896.21, stdev=156968.37 00:26:10.614 lat (usec): min=1370, max=877465, avg=220747.36, stdev=159216.49 00:26:10.614 clat percentiles (msec): 00:26:10.614 | 1.00th=[ 3], 5.00th=[ 21], 10.00th=[ 54], 20.00th=[ 83], 00:26:10.614 | 30.00th=[ 101], 40.00th=[ 136], 50.00th=[ 190], 60.00th=[ 243], 00:26:10.614 | 70.00th=[ 288], 80.00th=[ 342], 90.00th=[ 426], 95.00th=[ 518], 00:26:10.614 | 99.00th=[ 701], 99.50th=[ 785], 99.90th=[ 852], 99.95th=[ 860], 00:26:10.614 | 99.99th=[ 877] 00:26:10.614 bw ( KiB/s): min=24064, max=179200, per=9.30%, avg=73016.70, stdev=47943.02, samples=20 00:26:10.614 iops : min= 94, max= 700, avg=285.15, stdev=187.33, samples=20 00:26:10.614 lat (msec) : 2=0.07%, 4=2.19%, 10=0.69%, 20=2.02%, 50=4.46% 00:26:10.614 lat (msec) : 100=20.30%, 250=31.58%, 500=32.72%, 750=5.04%, 1000=0.93% 00:26:10.614 cpu : usr=0.09%, sys=0.91%, ctx=597, majf=0, minf=4097 00:26:10.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:10.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 job10: (groupid=0, jobs=1): err= 0: pid=390296: Fri Dec 13 12:31:36 2024 00:26:10.614 read: IOPS=271, BW=67.9MiB/s (71.2MB/s)(687MiB/10111msec) 00:26:10.614 slat (usec): min=11, max=400489, avg=3109.40, stdev=16144.27 00:26:10.614 clat (usec): min=1971, max=862449, avg=232162.06, stdev=213411.73 00:26:10.614 lat (msec): min=2, max=862, avg=235.27, stdev=216.22 00:26:10.614 clat percentiles (msec): 00:26:10.614 | 1.00th=[ 5], 
5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 31], 00:26:10.614 | 30.00th=[ 50], 40.00th=[ 62], 50.00th=[ 165], 60.00th=[ 313], 00:26:10.614 | 70.00th=[ 380], 80.00th=[ 430], 90.00th=[ 523], 95.00th=[ 625], 00:26:10.614 | 99.00th=[ 760], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 860], 00:26:10.614 | 99.99th=[ 860] 00:26:10.614 bw ( KiB/s): min=20439, max=323584, per=8.75%, avg=68708.20, stdev=74514.08, samples=20 00:26:10.614 iops : min= 79, max= 1264, avg=268.30, stdev=291.12, samples=20 00:26:10.614 lat (msec) : 2=0.07%, 4=0.47%, 10=8.19%, 20=4.44%, 50=17.32% 00:26:10.614 lat (msec) : 100=16.27%, 250=9.02%, 500=32.93%, 750=9.93%, 1000=1.35% 00:26:10.614 cpu : usr=0.15%, sys=1.06%, ctx=696, majf=0, minf=4097 00:26:10.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:10.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.614 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.614 00:26:10.614 Run status group 0 (all jobs): 00:26:10.614 READ: bw=767MiB/s (804MB/s), 47.3MiB/s-87.1MiB/s (49.6MB/s-91.3MB/s), io=7777MiB (8155MB), run=10067-10143msec 00:26:10.614 00:26:10.614 Disk stats (read/write): 00:26:10.614 nvme0n1: ios=6495/0, merge=0/0, ticks=1218120/0, in_queue=1218120, util=97.25% 00:26:10.614 nvme10n1: ios=6118/0, merge=0/0, ticks=1225294/0, in_queue=1225294, util=97.42% 00:26:10.614 nvme1n1: ios=4717/0, merge=0/0, ticks=1213520/0, in_queue=1213520, util=97.71% 00:26:10.614 nvme2n1: ios=5576/0, merge=0/0, ticks=1187929/0, in_queue=1187929, util=97.83% 00:26:10.614 nvme3n1: ios=5951/0, merge=0/0, ticks=1212585/0, in_queue=1212585, util=97.92% 00:26:10.614 nvme4n1: ios=3669/0, merge=0/0, ticks=1234794/0, in_queue=1234794, util=98.27% 00:26:10.614 nvme5n1: ios=6911/0, merge=0/0, ticks=1233201/0, in_queue=1233201, util=98.41% 00:26:10.614 nvme6n1: ios=5198/0, merge=0/0, ticks=1245667/0, in_queue=1245667, util=98.51% 00:26:10.614 nvme7n1: ios=4850/0, merge=0/0, ticks=1189328/0, in_queue=1189328, util=98.93% 00:26:10.614 nvme8n1: ios=5523/0, merge=0/0, ticks=1241421/0, in_queue=1241421, util=99.08% 00:26:10.614 nvme9n1: ios=5369/0, merge=0/0, ticks=1208370/0, in_queue=1208370, util=99.20% 00:26:10.614 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:10.614 [global] 00:26:10.614 thread=1 00:26:10.614 invalidate=1 00:26:10.614 rw=randwrite 00:26:10.614 time_based=1 00:26:10.614 runtime=10 00:26:10.614 ioengine=libaio 00:26:10.614 direct=1 00:26:10.614 bs=262144 00:26:10.614 iodepth=64 00:26:10.614 norandommap=1 00:26:10.614 numjobs=1 00:26:10.614 00:26:10.614 [job0] 00:26:10.614 filename=/dev/nvme0n1 00:26:10.614 [job1] 00:26:10.614 filename=/dev/nvme10n1 00:26:10.614 [job2] 00:26:10.614 filename=/dev/nvme1n1 00:26:10.614 [job3] 00:26:10.614 filename=/dev/nvme2n1 00:26:10.614 [job4] 00:26:10.614 filename=/dev/nvme3n1 00:26:10.614 [job5] 00:26:10.614 filename=/dev/nvme4n1 00:26:10.614 [job6] 00:26:10.614 filename=/dev/nvme5n1 00:26:10.614 [job7] 00:26:10.614 filename=/dev/nvme6n1 00:26:10.614 [job8] 00:26:10.614 filename=/dev/nvme7n1 00:26:10.614 [job9] 00:26:10.614 filename=/dev/nvme8n1 00:26:10.614 [job10] 00:26:10.614 filename=/dev/nvme9n1 00:26:10.614 Could not set queue depth (nvme0n1) 00:26:10.614 
Could not set queue depth (nvme10n1) 00:26:10.614 Could not set queue depth (nvme1n1) 00:26:10.614 Could not set queue depth (nvme2n1) 00:26:10.614 Could not set queue depth (nvme3n1) 00:26:10.614 Could not set queue depth (nvme4n1) 00:26:10.614 Could not set queue depth (nvme5n1) 00:26:10.614 Could not set queue depth (nvme6n1) 00:26:10.614 Could not set queue depth (nvme7n1) 00:26:10.614 Could not set queue depth (nvme8n1) 00:26:10.614 Could not set queue depth (nvme9n1) 00:26:10.615 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.615 fio-3.35 00:26:10.615 Starting 11 threads 00:26:20.597 00:26:20.597 job0: (groupid=0, jobs=1): err= 0: pid=391316: Fri Dec 13 12:31:47 2024 00:26:20.597 write: IOPS=362, BW=90.5MiB/s (94.9MB/s)(914MiB/10092msec); 0 zone resets 00:26:20.597 slat (usec): min=23, max=86888, avg=2072.07, stdev=6014.81 00:26:20.597 clat (usec): min=917, max=499959, avg=174629.10, stdev=126546.69 00:26:20.597 lat (usec): min=978, max=500005, avg=176701.17, stdev=128361.58 00:26:20.597 clat percentiles (msec): 00:26:20.597 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 21], 20.00th=[ 79], 00:26:20.597 | 30.00th=[ 107], 40.00th=[ 118], 50.00th=[ 125], 60.00th=[ 174], 00:26:20.597 | 70.00th=[ 220], 80.00th=[ 305], 90.00th=[ 376], 95.00th=[ 418], 00:26:20.597 | 99.00th=[ 472], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 502], 00:26:20.597 | 99.99th=[ 502] 00:26:20.597 bw ( KiB/s): min=34304, max=258560, per=8.47%, avg=91916.15, stdev=54730.43, samples=20 00:26:20.597 iops : min= 134, max= 1010, avg=359.00, stdev=213.76, samples=20 00:26:20.597 lat (usec) : 1000=0.11% 00:26:20.597 lat (msec) : 2=0.55%, 4=1.29%, 10=4.32%, 20=3.48%, 50=6.51% 00:26:20.597 lat (msec) : 100=10.62%, 250=46.09%, 500=27.04% 00:26:20.597 cpu : usr=0.82%, sys=1.09%, ctx=2024, majf=0, minf=1 00:26:20.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:20.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.597 issued rwts: total=0,3654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.597 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:20.597 job1: (groupid=0, jobs=1): err= 0: pid=391328: Fri Dec 13 12:31:47 2024 00:26:20.597 write: IOPS=229, BW=57.3MiB/s (60.1MB/s)(582MiB/10154msec); 0 zone resets 00:26:20.597 slat (usec): min=20, max=95407, avg=3836.88, stdev=8448.23 00:26:20.597 clat (msec): min=8, max=520, avg=275.19, stdev=119.07 00:26:20.597 lat (msec): min=8, max=520, avg=279.03, stdev=120.70 00:26:20.597 clat percentiles (msec): 00:26:20.597 | 1.00th=[ 20], 5.00th=[ 77], 10.00th=[ 111], 20.00th=[ 159], 00:26:20.597 | 30.00th=[ 203], 40.00th=[ 262], 50.00th=[ 288], 60.00th=[ 313], 00:26:20.597 | 70.00th=[ 355], 80.00th=[ 384], 90.00th=[ 430], 95.00th=[ 460], 00:26:20.597 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 523], 00:26:20.597 | 99.99th=[ 523] 00:26:20.597 bw ( KiB/s): min=32768, max=122880, per=5.34%, avg=57968.90, stdev=24017.99, samples=20 00:26:20.597 iops : min= 128, max= 480, avg=226.40, stdev=93.74, samples=20 00:26:20.597 lat (msec) : 10=0.26%, 20=0.86%, 50=2.32%, 100=4.30%, 250=30.28% 00:26:20.597 lat (msec) : 500=60.70%, 750=1.29% 00:26:20.597 cpu : usr=0.58%, sys=0.65%, ctx=806, majf=0, minf=1 00:26:20.597 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:20.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.597 issued rwts: total=0,2328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.597 job2: (groupid=0, jobs=1): err= 0: pid=391335: Fri Dec 13 12:31:47 2024 00:26:20.597 write: IOPS=466, BW=117MiB/s (122MB/s)(1180MiB/10113msec); 0 zone resets 00:26:20.597 slat (usec): min=21, max=194747, avg=1785.73, stdev=5581.61 00:26:20.597 clat (usec): min=1252, max=482345, avg=134999.73, stdev=83330.70 00:26:20.597 lat (usec): min=1319, max=634980, avg=136785.47, stdev=84435.10 00:26:20.597 clat percentiles (msec): 00:26:20.597 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 41], 20.00th=[ 47], 00:26:20.597 | 30.00th=[ 67], 40.00th=[ 114], 50.00th=[ 146], 60.00th=[ 165], 00:26:20.597 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 218], 95.00th=[ 271], 00:26:20.597 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 472], 99.95th=[ 481], 00:26:20.597 | 99.99th=[ 485] 00:26:20.597 bw ( KiB/s): min=35328, max=269824, per=10.98%, avg=119219.20, stdev=65981.70, samples=20 00:26:20.597 iops : min= 138, max= 1054, avg=465.70, stdev=257.74, samples=20 00:26:20.597 lat (msec) : 2=0.11%, 4=0.76%, 10=2.37%, 20=1.95%, 50=21.59% 00:26:20.598 lat (msec) : 100=7.14%, 250=59.28%, 500=6.80% 00:26:20.598 cpu : usr=1.05%, sys=1.20%, ctx=1983, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,4720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job3: (groupid=0, jobs=1): err= 0: pid=391336: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=429, BW=107MiB/s (112MB/s)(1084MiB/10107msec); 0 zone resets 00:26:20.598 slat (usec): min=30, max=92105, avg=2052.55, stdev=4504.25 00:26:20.598 clat (usec): min=1634, max=464124, avg=147030.32, stdev=68850.84 00:26:20.598 lat (usec): min=1697, max=464168, avg=149082.87, stdev=69559.46 00:26:20.598 clat percentiles (msec): 00:26:20.598 | 1.00th=[ 25], 
5.00th=[ 33], 10.00th=[ 69], 20.00th=[ 103], 00:26:20.598 | 30.00th=[ 113], 40.00th=[ 122], 50.00th=[ 144], 60.00th=[ 161], 00:26:20.598 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 271], 00:26:20.598 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 451], 99.95th=[ 456], 00:26:20.598 | 99.99th=[ 464] 00:26:20.598 bw ( KiB/s): min=71680, max=176128, per=10.08%, avg=109401.00, stdev=29884.95, samples=20 00:26:20.598 iops : min= 280, max= 688, avg=427.30, stdev=116.72, samples=20 00:26:20.598 lat (msec) : 2=0.02%, 4=0.21%, 10=0.23%, 20=0.12%, 50=7.56% 00:26:20.598 lat (msec) : 100=11.16%, 250=73.55%, 500=7.15% 00:26:20.598 cpu : usr=1.18%, sys=1.28%, ctx=1534, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,4336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job4: (groupid=0, jobs=1): err= 0: pid=391338: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=523, BW=131MiB/s (137MB/s)(1321MiB/10089msec); 0 zone resets 00:26:20.598 slat (usec): min=19, max=102313, avg=1677.47, stdev=4101.25 00:26:20.598 clat (usec): min=713, max=441726, avg=120511.04, stdev=80804.62 00:26:20.598 lat (usec): min=754, max=441769, avg=122188.50, stdev=81720.00 00:26:20.598 clat percentiles (usec): 00:26:20.598 | 1.00th=[ 1909], 5.00th=[ 9634], 10.00th=[ 25560], 20.00th=[ 47973], 00:26:20.598 | 30.00th=[ 58983], 40.00th=[ 92799], 50.00th=[112722], 60.00th=[123208], 00:26:20.598 | 70.00th=[160433], 80.00th=[189793], 90.00th=[223347], 95.00th=[265290], 00:26:20.598 | 99.00th=[379585], 99.50th=[392168], 99.90th=[429917], 99.95th=[438305], 00:26:20.598 | 99.99th=[442500] 00:26:20.598 bw ( KiB/s): min=65536, max=321536, per=12.31%, avg=133632.00, stdev=71908.88, samples=20 00:26:20.598 iops : min= 256, max= 1256, avg=522.00, stdev=280.89, samples=20 00:26:20.598 lat (usec) : 750=0.09%, 1000=0.15% 00:26:20.598 lat (msec) : 2=0.78%, 4=0.85%, 10=3.27%, 20=4.07%, 50=13.10% 00:26:20.598 lat (msec) : 100=20.05%, 250=50.33%, 500=7.31% 00:26:20.598 cpu : usr=1.06%, sys=1.46%, ctx=2069, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,5283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job5: (groupid=0, jobs=1): err= 0: pid=391339: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=438, BW=110MiB/s (115MB/s)(1102MiB/10047msec); 0 zone resets 00:26:20.598 slat (usec): min=27, max=109968, avg=1461.81, stdev=5645.24 00:26:20.598 clat (usec): min=733, max=498999, avg=144071.90, stdev=141003.35 00:26:20.598 lat (usec): min=767, max=499059, avg=145533.71, stdev=142648.94 00:26:20.598 clat percentiles (usec): 00:26:20.598 | 1.00th=[ 1844], 5.00th=[ 4883], 10.00th=[ 11338], 20.00th=[ 21890], 00:26:20.598 | 30.00th=[ 30278], 40.00th=[ 50070], 50.00th=[ 67634], 60.00th=[129500], 00:26:20.598 | 70.00th=[252707], 80.00th=[304088], 90.00th=[354419], 95.00th=[404751], 00:26:20.598 | 99.00th=[467665], 99.50th=[480248], 99.90th=[497026], 99.95th=[497026], 00:26:20.598 | 99.99th=[497026] 
00:26:20.598 bw ( KiB/s): min=35840, max=291328, per=10.25%, avg=111241.85, stdev=85585.89, samples=20 00:26:20.598 iops : min= 140, max= 1138, avg=434.50, stdev=334.33, samples=20 00:26:20.598 lat (usec) : 750=0.02%, 1000=0.27% 00:26:20.598 lat (msec) : 2=0.88%, 4=2.61%, 10=4.90%, 20=8.94%, 50=22.37% 00:26:20.598 lat (msec) : 100=17.06%, 250=12.45%, 500=30.49% 00:26:20.598 cpu : usr=0.94%, sys=1.44%, ctx=3171, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,4408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job6: (groupid=0, jobs=1): err= 0: pid=391340: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=262, BW=65.7MiB/s (68.9MB/s)(668MiB/10153msec); 0 zone resets 00:26:20.598 slat (usec): min=22, max=171371, avg=3047.36, stdev=8548.65 00:26:20.598 clat (msec): min=3, max=684, avg=240.15, stdev=132.89 00:26:20.598 lat (msec): min=3, max=684, avg=243.19, stdev=134.87 00:26:20.598 clat percentiles (msec): 00:26:20.598 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 89], 00:26:20.598 | 30.00th=[ 136], 40.00th=[ 226], 50.00th=[ 271], 60.00th=[ 296], 00:26:20.598 | 70.00th=[ 330], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 439], 00:26:20.598 | 99.00th=[ 506], 99.50th=[ 535], 99.90th=[ 684], 99.95th=[ 684], 00:26:20.598 | 99.99th=[ 684] 00:26:20.598 bw ( KiB/s): min=36352, max=122880, per=6.15%, avg=66751.35, stdev=29373.67, samples=20 00:26:20.598 iops : min= 142, max= 480, avg=260.70, stdev=114.65, samples=20 00:26:20.598 lat (msec) : 4=0.07%, 10=0.19%, 20=1.09%, 50=10.94%, 100=10.94% 00:26:20.598 lat (msec) : 250=19.36%, 500=56.33%, 750=1.09% 00:26:20.598 cpu : usr=0.54%, sys=0.90%, ctx=1328, majf=0, minf=2 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,2670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job7: (groupid=0, jobs=1): err= 0: pid=391341: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=612, BW=153MiB/s (161MB/s)(1554MiB/10153msec); 0 zone resets 00:26:20.598 slat (usec): min=25, max=108516, avg=1432.62, stdev=4465.09 00:26:20.598 clat (usec): min=881, max=499248, avg=103040.16, stdev=114877.01 00:26:20.598 lat (usec): min=951, max=499319, avg=104472.78, stdev=116370.20 00:26:20.598 clat percentiles (msec): 00:26:20.598 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:26:20.598 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 50], 00:26:20.598 | 70.00th=[ 51], 80.00th=[ 188], 90.00th=[ 292], 95.00th=[ 393], 00:26:20.598 | 99.00th=[ 456], 99.50th=[ 472], 99.90th=[ 493], 99.95th=[ 498], 00:26:20.598 | 99.99th=[ 502] 00:26:20.598 bw ( KiB/s): min=36864, max=408064, per=14.51%, avg=157521.15, stdev=145311.89, samples=20 00:26:20.598 iops : min= 144, max= 1594, avg=615.30, stdev=567.64, samples=20 00:26:20.598 lat (usec) : 1000=0.02% 00:26:20.598 lat (msec) : 2=0.10%, 4=0.23%, 10=0.79%, 20=1.30%, 50=61.59% 00:26:20.598 lat (msec) : 100=11.97%, 250=7.98%, 500=16.04% 00:26:20.598 cpu : usr=1.67%, sys=1.54%, ctx=2048, 
majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,6217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job8: (groupid=0, jobs=1): err= 0: pid=391348: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=281, BW=70.3MiB/s (73.8MB/s)(714MiB/10153msec); 0 zone resets 00:26:20.598 slat (usec): min=25, max=88965, avg=2783.63, stdev=7465.22 00:26:20.598 clat (msec): min=3, max=498, avg=224.50, stdev=127.18 00:26:20.598 lat (msec): min=5, max=498, avg=227.29, stdev=129.03 00:26:20.598 clat percentiles (msec): 00:26:20.598 | 1.00th=[ 11], 5.00th=[ 53], 10.00th=[ 73], 20.00th=[ 111], 00:26:20.598 | 30.00th=[ 124], 40.00th=[ 174], 50.00th=[ 192], 60.00th=[ 245], 00:26:20.598 | 70.00th=[ 313], 80.00th=[ 359], 90.00th=[ 405], 95.00th=[ 447], 00:26:20.598 | 99.00th=[ 489], 99.50th=[ 493], 99.90th=[ 498], 99.95th=[ 498], 00:26:20.598 | 99.99th=[ 498] 00:26:20.598 bw ( KiB/s): min=34816, max=179712, per=6.59%, avg=71500.80, stdev=39508.52, samples=20 00:26:20.598 iops : min= 136, max= 702, avg=279.30, stdev=154.33, samples=20 00:26:20.598 lat (msec) : 4=0.04%, 10=0.84%, 20=1.26%, 50=2.52%, 100=13.27% 00:26:20.598 lat (msec) : 250=42.67%, 500=39.41% 00:26:20.598 cpu : usr=0.58%, sys=0.96%, ctx=1296, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.598 issued rwts: total=0,2857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.598 job9: (groupid=0, jobs=1): err= 0: pid=391351: Fri Dec 13 12:31:47 2024 00:26:20.598 write: IOPS=426, BW=107MiB/s (112MB/s)(1077MiB/10108msec); 0 zone resets 00:26:20.598 slat (usec): min=27, max=134139, avg=2156.48, stdev=5206.14 00:26:20.598 clat (usec): min=964, max=461558, avg=147880.82, stdev=76260.18 00:26:20.598 lat (usec): min=1220, max=461618, avg=150037.29, stdev=77085.98 00:26:20.598 clat percentiles (msec): 00:26:20.598 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 69], 20.00th=[ 103], 00:26:20.598 | 30.00th=[ 113], 40.00th=[ 122], 50.00th=[ 144], 60.00th=[ 163], 00:26:20.598 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 222], 95.00th=[ 284], 00:26:20.598 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 443], 00:26:20.598 | 99.99th=[ 464] 00:26:20.598 bw ( KiB/s): min=34885, max=184320, per=10.01%, avg=108701.05, stdev=38889.89, samples=20 00:26:20.598 iops : min= 136, max= 720, avg=424.60, stdev=151.94, samples=20 00:26:20.598 lat (usec) : 1000=0.02% 00:26:20.598 lat (msec) : 2=0.16%, 4=1.28%, 10=1.51%, 20=1.74%, 50=3.88% 00:26:20.598 lat (msec) : 100=10.68%, 250=73.06%, 500=7.68% 00:26:20.598 cpu : usr=0.94%, sys=1.23%, ctx=1499, majf=0, minf=1 00:26:20.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:20.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.599 issued rwts: total=0,4309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.599 
job10: (groupid=0, jobs=1): err= 0: pid=391359: Fri Dec 13 12:31:47 2024 00:26:20.599 write: IOPS=224, BW=56.1MiB/s (58.8MB/s)(569MiB/10151msec); 0 zone resets 00:26:20.599 slat (usec): min=25, max=71150, avg=4203.94, stdev=8597.63 00:26:20.599 clat (msec): min=5, max=529, avg=280.98, stdev=116.26 00:26:20.599 lat (msec): min=5, max=529, avg=285.18, stdev=117.93 00:26:20.599 clat percentiles (msec): 00:26:20.599 | 1.00th=[ 27], 5.00th=[ 92], 10.00th=[ 111], 20.00th=[ 161], 00:26:20.599 | 30.00th=[ 228], 40.00th=[ 271], 50.00th=[ 296], 60.00th=[ 317], 00:26:20.599 | 70.00th=[ 347], 80.00th=[ 372], 90.00th=[ 422], 95.00th=[ 485], 00:26:20.599 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 531], 99.95th=[ 531], 00:26:20.599 | 99.99th=[ 531] 00:26:20.599 bw ( KiB/s): min=32768, max=122880, per=5.22%, avg=56652.80, stdev=25129.64, samples=20 00:26:20.599 iops : min= 128, max= 480, avg=221.30, stdev=98.16, samples=20 00:26:20.599 lat (msec) : 10=0.13%, 20=0.62%, 50=1.45%, 100=5.01%, 250=25.18% 00:26:20.599 lat (msec) : 500=64.98%, 750=2.64% 00:26:20.599 cpu : usr=0.61%, sys=0.52%, ctx=695, majf=0, minf=1 00:26:20.599 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:20.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.599 issued rwts: total=0,2276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.599 00:26:20.599 Run status group 0 (all jobs): 00:26:20.599 WRITE: bw=1060MiB/s (1112MB/s), 56.1MiB/s-153MiB/s (58.8MB/s-161MB/s), io=10.5GiB (11.3GB), run=10047-10154msec 00:26:20.599 00:26:20.599 Disk stats (read/write): 00:26:20.599 nvme0n1: ios=49/7065, merge=0/0, ticks=48/1212668, in_queue=1212716, util=94.65% 00:26:20.599 nvme10n1: ios=26/4627, merge=0/0, ticks=524/1234578, in_queue=1235102, util=96.18% 00:26:20.599 nvme1n1: ios=40/9219, merge=0/0, ticks=2385/1198144, in_queue=1200529, util=100.00% 00:26:20.599 nvme2n1: ios=43/8448, merge=0/0, ticks=2631/1203395, in_queue=1206026, util=100.00% 00:26:20.599 nvme3n1: ios=0/10335, merge=0/0, ticks=0/1206355, in_queue=1206355, util=95.93% 00:26:20.599 nvme4n1: ios=39/8421, merge=0/0, ticks=674/1219924, in_queue=1220598, util=100.00% 00:26:20.599 nvme5n1: ios=47/5315, merge=0/0, ticks=1470/1228408, in_queue=1229878, util=100.00% 00:26:20.599 nvme6n1: ios=0/12409, merge=0/0, ticks=0/1234022, in_queue=1234022, util=97.41% 00:26:20.599 nvme7n1: ios=35/5689, merge=0/0, ticks=560/1237078, in_queue=1237638, util=100.00% 00:26:20.599 nvme8n1: ios=34/8391, merge=0/0, ticks=505/1202236, in_queue=1202741, util=100.00% 00:26:20.599 nvme9n1: ios=35/4531, merge=0/0, ticks=1543/1231460, in_queue=1233003, util=100.00% 00:26:20.599 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:20.599 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:20.599 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.599 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:20.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:20.599 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.599 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:21.168 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:21.168 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:21.168 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.168 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.428 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:21.687 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.687 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:21.946 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:21.946 12:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.946 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.205 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.205 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.205 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:22.463 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.463 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.464 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:22.723 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:22.723 12:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.723 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:22.982 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.982 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:23.241 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:23.241 12:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:23.241 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.241 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:23.501 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:23.501 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:23.501 
12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.501 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.501 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.501 rmmod nvme_tcp 00:26:23.501 rmmod nvme_fabrics 00:26:23.501 rmmod nvme_keyring 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 383863 ']' 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 383863 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 383863 ']' 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 383863 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383863 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383863' 00:26:23.501 killing process with pid 383863 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 383863 00:26:23.501 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 383863 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.070 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.975 00:26:25.975 real 1m11.441s 00:26:25.975 user 4m19.284s 00:26:25.975 sys 0m16.631s 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.975 ************************************ 00:26:25.975 END TEST nvmf_multiconnection 00:26:25.975 ************************************ 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.975 12:31:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:26.236 ************************************ 00:26:26.236 START TEST nvmf_initiator_timeout 00:26:26.236 ************************************ 00:26:26.236 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:26.236 * Looking for test storage... 00:26:26.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.236 --rc genhtml_branch_coverage=1 00:26:26.236 --rc genhtml_function_coverage=1 00:26:26.236 --rc genhtml_legend=1 00:26:26.236 --rc geninfo_all_blocks=1 00:26:26.236 --rc geninfo_unexecuted_blocks=1 00:26:26.236 00:26:26.236 ' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.236 --rc genhtml_branch_coverage=1 00:26:26.236 --rc genhtml_function_coverage=1 00:26:26.236 --rc genhtml_legend=1 00:26:26.236 --rc geninfo_all_blocks=1 00:26:26.236 --rc geninfo_unexecuted_blocks=1 00:26:26.236 00:26:26.236 ' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.236 --rc genhtml_branch_coverage=1 00:26:26.236 --rc genhtml_function_coverage=1 00:26:26.236 --rc genhtml_legend=1 00:26:26.236 --rc geninfo_all_blocks=1 00:26:26.236 --rc geninfo_unexecuted_blocks=1 00:26:26.236 00:26:26.236 ' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.236 --rc genhtml_branch_coverage=1 00:26:26.236 --rc genhtml_function_coverage=1 00:26:26.236 --rc genhtml_legend=1 00:26:26.236 --rc geninfo_all_blocks=1 00:26:26.236 --rc geninfo_unexecuted_blocks=1 00:26:26.236 00:26:26.236 ' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.236 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.237 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.237 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.809 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:32.809 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.809 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:32.809 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:32.809 Found net devices under 0000:af:00.0: cvl_0_0 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.809 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:32.809 Found net devices under 0000:af:00.1: cvl_0_1 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.809 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.810 12:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:26:32.810 00:26:32.810 --- 10.0.0.2 ping statistics --- 00:26:32.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.810 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:26:32.810 00:26:32.810 --- 10.0.0.1 ping statistics --- 00:26:32.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.810 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=396687 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
396687 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 396687 ']' 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.810 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 [2024-12-13 12:31:59.919080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:32.810 [2024-12-13 12:31:59.919123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.810 [2024-12-13 12:31:59.993160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.810 [2024-12-13 12:32:00.017450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.810 [2024-12-13 12:32:00.017484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.810 [2024-12-13 12:32:00.017493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.810 [2024-12-13 12:32:00.017498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.810 [2024-12-13 12:32:00.017506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
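
A minimal sketch of the wait step recorded above: the harness has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and `waitforlisten 396687` blocks until the target's RPC socket answers. This is an assumed simplification of the real waitforlisten helper in autotest_common.sh, not its exact implementation; it uses the standard scripts/rpc.py client and the rpc_get_methods RPC, and the `_sketch` name is hypothetical:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            # Give up early if the target process died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods only succeeds once the app is servicing RPCs.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

The reactor notices that follow reflect the `-m 0xF` core mask passed to nvmf_tgt: one reactor per core 0-3.
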
00:26:32.810 [2024-12-13 12:32:00.018880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.810 [2024-12-13 12:32:00.018998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.810 [2024-12-13 12:32:00.019110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.810 [2024-12-13 12:32:00.019112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 Malloc0 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 Delay0 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 [2024-12-13 12:32:00.201487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 [2024-12-13 12:32:00.234712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:33.747 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:33.747 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:33.747 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.747 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:33.747 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:35.651 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:35.651 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:35.652 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=397320 00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v
00:26:35.932 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:26:35.932 [global]
00:26:35.932 thread=1
00:26:35.932 invalidate=1
00:26:35.932 rw=write
00:26:35.932 time_based=1
00:26:35.932 runtime=60
00:26:35.932 ioengine=libaio
00:26:35.932 direct=1
00:26:35.932 bs=4096
00:26:35.932 iodepth=1
00:26:35.932 norandommap=0
00:26:35.932 numjobs=1
00:26:35.932
00:26:35.932 verify_dump=1
00:26:35.932 verify_backlog=512
00:26:35.932 verify_state_save=0
00:26:35.932 do_verify=1
00:26:35.932 verify=crc32c-intel
00:26:35.932 [job0]
00:26:35.932 filename=/dev/nvme0n1
00:26:35.932 Could not set queue depth (nvme0n1)
00:26:36.189 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:26:36.189 fio-3.35
00:26:36.189 Starting 1 thread
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:38.713 true
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:38.713 true
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:38.713 true
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:38.713 true
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.713 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:41.986 true
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:41.986 true
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:41.986 true
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:41.986 true
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:26:41.986 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 397320
00:27:38.211
00:27:38.211 job0: (groupid=0, jobs=1): err= 0: pid=397502: Fri Dec 13 12:33:03 2024
00:27:38.211 read: IOPS=110, BW=440KiB/s (451kB/s)(25.8MiB/60013msec)
00:27:38.211 slat (usec): min=3, max=12660, avg= 8.77, stdev=196.74
00:27:38.211 clat (usec): min=189, max=41399k, avg=8888.13, stdev=509566.20
00:27:38.211 lat (usec): min=193, max=41399k, avg=8896.90, stdev=509566.24
00:27:38.211 clat percentiles (usec):
00:27:38.211 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 223],
00:27:38.211 | 20.00th=[ 229], 30.00th=[ 235], 40.00th=[ 239],
00:27:38.211 | 50.00th=[ 243], 60.00th=[ 249], 70.00th=[ 253],
00:27:38.211 | 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 41157],
00:27:38.211 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206],
00:27:38.211 | 99.95th=[ 42206], 99.99th=[17112761]
00:27:38.211 write: IOPS=110, BW=444KiB/s (454kB/s)(26.0MiB/60013msec); 0 zone resets
00:27:38.211 slat (usec): min=4, max=27682, avg= 9.64, stdev=339.25
00:27:38.211 clat (usec): min=138, max=385, avg=178.67, stdev=12.53
00:27:38.211 lat (usec): min=144, max=28053, avg=188.31, stdev=341.83
00:27:38.211 clat percentiles (usec):
00:27:38.211 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167],
00:27:38.211 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182],
00:27:38.211 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 198],
00:27:38.211 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 235], 99.95th=[ 281],
00:27:38.211 | 99.99th=[ 388]
00:27:38.211 bw ( KiB/s): min= 4096, max=10744, per=100.00%, avg=7606.86, stdev=2583.74, samples=7
00:27:38.211 iops : min= 1024, max= 2686, avg=1901.71, stdev=645.94, samples=7
00:27:38.211 lat (usec) : 250=81.89%, 500=15.17%, 750=0.02%
00:27:38.211 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=2.90%, >=2000=0.01%
00:27:38.211 cpu : usr=0.05%, sys=0.13%, ctx=13266, majf=0, minf=1
00:27:38.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:27:38.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:38.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:38.211 issued rwts: total=6602,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:38.211 latency : target=0, window=0, percentile=100.00%, depth=1
00:27:38.211
00:27:38.211 Run status group 0 (all jobs):
00:27:38.211 READ: bw=440KiB/s (451kB/s), 440KiB/s-440KiB/s (451kB/s-451kB/s), io=25.8MiB (27.0MB), run=60013-60013msec
00:27:38.211 WRITE: bw=444KiB/s (454kB/s), 444KiB/s-444KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60013-60013msec
00:27:38.211
00:27:38.211 Disk stats (read/write):
00:27:38.211 nvme0n1: ios=6649/6656, merge=0/0, ticks=17984/1177, in_queue=19161, util=99.84%
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:38.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:27:38.212 nvmf hotplug test: fio successful as expected
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT
SIGTERM EXIT 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.212 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.212 rmmod nvme_tcp 00:27:38.212 rmmod nvme_fabrics 00:27:38.212 rmmod nvme_keyring 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 396687 ']' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 396687 ']' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396687' 00:27:38.212 killing process with pid 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 396687 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.212 12:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.212 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.781 00:27:38.781 real 1m12.654s 00:27:38.781 user 4m22.116s 00:27:38.781 sys 0m6.562s 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 ************************************ 00:27:38.781 END TEST nvmf_initiator_timeout 00:27:38.781 ************************************ 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.781 12:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.354 12:33:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:45.354 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:45.354 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:45.354 Found net devices under 0000:af:00.0: cvl_0_0 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:45.354 Found net devices under 0000:af:00.1: cvl_0_1 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:45.354 ************************************ 00:27:45.354 START TEST nvmf_perf_adq 00:27:45.354 ************************************ 00:27:45.354 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:45.354 * Looking for test storage... 
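
The "START TEST nvmf_perf_adq" banner above is printed by the harness's run_test wrapper, which also produced the matching "END TEST nvmf_initiator_timeout" banner and the `time` summary (real 1m12.654s / user 4m22.116s / sys 0m6.562s) earlier. A hypothetical sketch of that wrapper pattern, not the exact helper from autotest_common.sh:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"           # the real/user/sys block in the log is this time output
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage mirroring the trace above:
    # run_test_sketch nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp

The storage probe below then reports where the test workspace was found.
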
00:27:45.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:45.354 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:45.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.355 --rc genhtml_branch_coverage=1 00:27:45.355 --rc genhtml_function_coverage=1 00:27:45.355 --rc genhtml_legend=1 00:27:45.355 --rc geninfo_all_blocks=1 00:27:45.355 --rc geninfo_unexecuted_blocks=1 00:27:45.355 00:27:45.355 ' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:45.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.355 --rc genhtml_branch_coverage=1 00:27:45.355 --rc genhtml_function_coverage=1 00:27:45.355 --rc genhtml_legend=1 00:27:45.355 --rc geninfo_all_blocks=1 00:27:45.355 --rc geninfo_unexecuted_blocks=1 00:27:45.355 00:27:45.355 ' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:45.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.355 --rc genhtml_branch_coverage=1 00:27:45.355 --rc genhtml_function_coverage=1 00:27:45.355 --rc genhtml_legend=1 00:27:45.355 --rc geninfo_all_blocks=1 00:27:45.355 --rc geninfo_unexecuted_blocks=1 00:27:45.355 00:27:45.355 ' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:45.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.355 --rc genhtml_branch_coverage=1 00:27:45.355 --rc genhtml_function_coverage=1 00:27:45.355 --rc genhtml_legend=1 00:27:45.355 --rc geninfo_all_blocks=1 00:27:45.355 --rc geninfo_unexecuted_blocks=1 00:27:45.355 00:27:45.355 ' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
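
# The lcov gate traced above runs scripts/common.sh's cmp_versions: both
# version strings are split on ".-:" into arrays and compared field by field,
# with missing fields treated as 0. A standalone sketch of just the less-than
# case (ver_lt is our name for illustration, not a helper from the repo):
ver_lt() {
  local IFS=.-: i v1 v2
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo old   # same verdict as the "lt 1.15 2" trace: field 1 < 2
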
00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:45.355 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.355 12:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.630 12:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:50.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:50.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:50.630 Found net devices under 0000:af:00.0: cvl_0_0 00:27:50.630 12:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:50.630 Found net devices under 0000:af:00.1: cvl_0_1 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:50.630 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:50.631 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:50.631 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:51.567 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:54.856 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:00.132 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.133 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.133 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.133 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.15 ms 00:28:00.134 00:28:00.134 --- 10.0.0.2 ping statistics --- 00:28:00.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.134 rtt min/avg/max/mdev = 1.153/1.153/1.153/0.000 ms 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
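
# The records above stand up the suite's split topology by hand: flush both
# E810 ports, move the target side (cvl_0_0) into a fresh network namespace,
# address both ends out of 10.0.0.0/24, and open a tagged firewall hole for
# the NVMe/TCP port. Condensed, using the interface names discovered earlier:
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# The two pings that follow (10.0.0.2 from the default ns, 10.0.0.1 from inside
# the ns) prove the path between the two ports works before any NVMe traffic flows.
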
00:28:00.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:00.134 00:28:00.134 --- 10.0.0.1 ping statistics --- 00:28:00.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.134 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.134 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.393 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.393 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=415739 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 415739 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 415739 ']' 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.394 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.394 [2024-12-13 12:33:27.924679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:00.394 [2024-12-13 12:33:27.924725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.394 [2024-12-13 12:33:28.000057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.394 [2024-12-13 12:33:28.023523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.394 [2024-12-13 12:33:28.023559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.394 [2024-12-13 12:33:28.023565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.394 [2024-12-13 12:33:28.023572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.394 [2024-12-13 12:33:28.023578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.394 [2024-12-13 12:33:28.024887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.394 [2024-12-13 12:33:28.024997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.394 [2024-12-13 12:33:28.025077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.394 [2024-12-13 12:33:28.025078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.394 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.394 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:00.394 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.394 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:00.394 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 
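
# rpc_cmd in these records is a thin wrapper around SPDK's scripts/rpc.py, so
# the ADQ-flavoured bring-up that starts here (and continues in the records
# that follow: transport, malloc bdev, subsystem, namespace, listener) can be
# replayed by hand against the same target. A sketch using only the RPCs and
# arguments visible in the trace:
scripts/rpc.py sock_impl_set_options -i posix \
    --enable-placement-id 0 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
# The target was launched with --wait-for-rpc, which is why framework_start_init
# must be issued explicitly before the transport can be created.
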
12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 [2024-12-13 12:33:28.245037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 Malloc1 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.653 [2024-12-13 12:33:28.304827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=415775 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:00.653 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:03.188 "tick_rate": 2100000000, 00:28:03.188 "poll_groups": [ 00:28:03.188 { 00:28:03.188 "name": "nvmf_tgt_poll_group_000", 00:28:03.188 "admin_qpairs": 1, 00:28:03.188 "io_qpairs": 1, 00:28:03.188 "current_admin_qpairs": 1, 00:28:03.188 "current_io_qpairs": 1, 00:28:03.188 "pending_bdev_io": 0, 00:28:03.188 "completed_nvme_io": 19803, 00:28:03.188 "transports": [ 00:28:03.188 { 00:28:03.188 "trtype": "TCP" 00:28:03.188 } 00:28:03.188 ] 00:28:03.188 }, 00:28:03.188 { 00:28:03.188 "name": "nvmf_tgt_poll_group_001", 00:28:03.188 "admin_qpairs": 0, 00:28:03.188 "io_qpairs": 1, 00:28:03.188 "current_admin_qpairs": 0, 00:28:03.188 "current_io_qpairs": 1, 00:28:03.188 "pending_bdev_io": 0, 00:28:03.188 "completed_nvme_io": 19858, 00:28:03.188 "transports": [ 00:28:03.188 { 00:28:03.188 "trtype": "TCP" 00:28:03.188 } 00:28:03.188 ] 00:28:03.188 }, 00:28:03.188 { 00:28:03.188 "name": "nvmf_tgt_poll_group_002", 00:28:03.188 "admin_qpairs": 0, 00:28:03.188 "io_qpairs": 1, 00:28:03.188 "current_admin_qpairs": 0, 00:28:03.188 "current_io_qpairs": 1, 00:28:03.188 "pending_bdev_io": 0, 00:28:03.188 "completed_nvme_io": 20043, 00:28:03.188 "transports": [ 00:28:03.188 { 00:28:03.188 "trtype": "TCP" 00:28:03.188 } 00:28:03.188 ] 00:28:03.188 }, 00:28:03.188 { 00:28:03.188 "name": "nvmf_tgt_poll_group_003", 00:28:03.188 "admin_qpairs": 0, 00:28:03.188 "io_qpairs": 1, 00:28:03.188 "current_admin_qpairs": 0, 00:28:03.188 "current_io_qpairs": 1, 00:28:03.188 "pending_bdev_io": 0, 00:28:03.188 "completed_nvme_io": 20298, 00:28:03.188 "transports": [ 00:28:03.188 { 00:28:03.188 "trtype": "TCP" 00:28:03.188 } 00:28:03.188 ] 00:28:03.188 } 00:28:03.188 ] 00:28:03.188 }' 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:03.188 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 415775 00:28:11.310 Initializing NVMe Controllers 00:28:11.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:11.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:11.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:11.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:28:11.310 Initialization complete. Launching workers. 00:28:11.310 ======================================================== 00:28:11.310 Latency(us) 00:28:11.310 Device Information : IOPS MiB/s Average min max 00:28:11.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10690.90 41.76 5986.64 1773.37 10319.34 00:28:11.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10629.20 41.52 6021.75 2361.33 10456.66 00:28:11.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10735.70 41.94 5960.87 2143.44 12911.26 00:28:11.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10579.20 41.33 6050.55 2394.99 9916.86 00:28:11.310 ======================================================== 00:28:11.310 Total : 42635.00 166.54 6004.77 1773.37 12911.26 00:28:11.310 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.310 rmmod nvme_tcp 00:28:11.310 rmmod nvme_fabrics 00:28:11.310 rmmod nvme_keyring 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 415739 ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 415739 ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415739' 00:28:11.310 killing process with pid 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 415739 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
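
# The count=4 check just above is the heart of the ADQ verification: while
# spdk_nvme_perf drives four queues from cores 4-7 (-c 0xF0), nvmf_get_stats is
# queried and jq counts how many target poll groups own exactly one active
# io_qpair. Four groups at one qpair each means the connections really were
# spread across all four reactors. The same probe, run standalone against the
# target's default RPC socket:
scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
    | wc -l   # prints 4 on a healthy run, matching the stats dump above
# The completed_nvme_io counters (roughly 20k per group here) tell the same
# story: the load was balanced rather than piled onto one poll group.
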
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:11.310 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.311 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.311 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.311 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.311 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.219 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.219 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:13.219 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:13.219 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:14.598 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:17.138 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
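
# Teardown mirrors setup: unload nvme-tcp/fabrics, kill the target by pid, and
# strip only the firewall rules this suite added. The iptr step above does the
# last part by round-tripping the ruleset and dropping anything tagged
# SPDK_NVMF, which is exactly why the setup rule carried that comment:
iptables-save | grep -v SPDK_NVMF | iptables-restore
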
mellanox=0x15b3 pci net_dev 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.416 12:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.416 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.416 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.416 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.416 12:33:49 
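The discovery loop above resolves each supported PCI function to its kernel netdev by globbing sysfs. A minimal standalone sketch of the same lookup (the PCI address is the one reported in the log; any NIC bound to a kernel driver works the same way):

    # List the netdev name(s) registered for a PCI network function
    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "netdev for $pci: ${dev##*/}"    # prints e.g. cvl_0_0
    done
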
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.416 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.416 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:28:22.417 00:28:22.417 --- 10.0.0.2 ping statistics --- 00:28:22.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.417 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:22.417 00:28:22.417 --- 10.0.0.1 ping statistics --- 00:28:22.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.417 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:22.417 net.core.busy_poll = 1 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
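The namespace split above keeps target and initiator talking over the wire rather than the loopback: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while cvl_0_1 stays in the root namespace with 10.0.0.1/24. Reproduced by hand, with the interface and namespace names exactly as in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The harness also opens the NVMe/TCP port, tagging the rule for later cleanup:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The bidirectional pings that follow confirm the path before any NVMe/TCP traffic flows.
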
00:28:22.417 net.core.busy_read = 1 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:22.417 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419583 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419583 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419583 ']' 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.417 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 [2024-12-13 12:33:50.137069] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:22.682 [2024-12-13 12:33:50.137114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.682 [2024-12-13 12:33:50.212842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.682 [2024-12-13 12:33:50.236139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
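adq_configure_driver, executed above, arms the kernel and NIC for Application Device Queues: busy_poll/busy_read make receive paths poll instead of sleeping on interrupts, the mqprio qdisc in channel mode carves the port into two traffic classes (with "queues 2@0 2@2", TC0 gets queues 0-1 and TC1 gets queues 2-3), and the flower rule pins NVMe/TCP traffic for 10.0.0.2:4420 to hardware traffic class 1. The same steering, runnable inside the target namespace:

    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ethtool --offload cvl_0_0 hw-tc-offload on
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

skip_sw insists the match run in hardware; if the NIC cannot offload the rule, the filter add fails outright rather than silently falling back to software.
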
00:28:22.682 [2024-12-13 12:33:50.236185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.682 [2024-12-13 12:33:50.236193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.682 [2024-12-13 12:33:50.236199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.682 [2024-12-13 12:33:50.236204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.682 [2024-12-13 12:33:50.240801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.682 [2024-12-13 12:33:50.240830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.682 [2024-12-13 12:33:50.240935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.682 [2024-12-13 12:33:50.240936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 
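With the target up on cores 0-3 (-m 0xF) and paused at --wait-for-rpc, adq_configure_nvmf_target first tunes the socket layer: placement ID mode 1 groups incoming connections by their NAPI ID, i.e. by the hardware queue they arrive on, which is what later lets each TC1 queue land on its own poll group. The same RPCs issued directly (rpc_cmd in SPDK's harness wraps scripts/rpc.py, so the paths below assume an SPDK checkout):

    ./scripts/rpc.py sock_get_default_impl            # reports posix here
    ./scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 1 --enable-zerocopy-send-server
    ./scripts/rpc.py framework_start_init             # resumes an app started with --wait-for-rpc
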
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 [2024-12-13 12:33:50.468932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 Malloc1 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.945 [2024-12-13 12:33:50.530305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419813 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:22.945 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.881 12:33:52 
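The target is then provisioned and loaded. Note how the pieces line up with the driver setup: --sock-priority 1 marks target sockets with priority 1, which the mqprio "map 0 1" places in TC1, the same class the ingress flower rule selects; a 64 MiB malloc bdev backs subsystem cnode1; and the perf initiator runs on cores 4-7 (-c 0xF0), disjoint from the target's 0-3. The same sequence, standalone (commands as echoed above, paths assuming an SPDK checkout):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
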
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:24.881 "tick_rate": 2100000000, 00:28:24.881 "poll_groups": [ 00:28:24.881 { 00:28:24.881 "name": "nvmf_tgt_poll_group_000", 00:28:24.881 "admin_qpairs": 1, 00:28:24.881 "io_qpairs": 2, 00:28:24.881 "current_admin_qpairs": 1, 00:28:24.881 "current_io_qpairs": 2, 00:28:24.881 "pending_bdev_io": 0, 00:28:24.881 "completed_nvme_io": 28522, 00:28:24.881 "transports": [ 00:28:24.881 { 00:28:24.881 "trtype": "TCP" 00:28:24.881 } 00:28:24.881 ] 00:28:24.881 }, 00:28:24.881 { 00:28:24.881 "name": "nvmf_tgt_poll_group_001", 00:28:24.881 "admin_qpairs": 0, 00:28:24.881 "io_qpairs": 2, 00:28:24.881 "current_admin_qpairs": 0, 00:28:24.881 "current_io_qpairs": 2, 00:28:24.881 "pending_bdev_io": 0, 00:28:24.881 "completed_nvme_io": 28303, 00:28:24.881 "transports": [ 00:28:24.881 { 00:28:24.881 "trtype": "TCP" 00:28:24.881 } 00:28:24.881 ] 00:28:24.881 }, 00:28:24.881 { 00:28:24.881 "name": "nvmf_tgt_poll_group_002", 00:28:24.881 "admin_qpairs": 0, 00:28:24.881 "io_qpairs": 0, 00:28:24.881 "current_admin_qpairs": 0, 00:28:24.881 "current_io_qpairs": 0, 00:28:24.881 "pending_bdev_io": 0, 00:28:24.881 "completed_nvme_io": 0, 00:28:24.881 "transports": [ 00:28:24.881 { 00:28:24.881 "trtype": "TCP" 00:28:24.881 } 00:28:24.881 ] 00:28:24.881 }, 00:28:24.881 { 00:28:24.881 "name": "nvmf_tgt_poll_group_003", 00:28:24.881 "admin_qpairs": 0, 00:28:24.881 "io_qpairs": 0, 00:28:24.881 "current_admin_qpairs": 0, 00:28:24.881 "current_io_qpairs": 0, 00:28:24.881 "pending_bdev_io": 0, 00:28:24.881 "completed_nvme_io": 0, 00:28:24.881 "transports": [ 00:28:24.881 { 00:28:24.881 "trtype": "TCP" 00:28:24.881 } 00:28:24.881 ] 00:28:24.881 } 00:28:24.881 ] 00:28:24.881 }' 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:24.881 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:25.155 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:25.155 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:25.155 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419813 00:28:33.525 Initializing NVMe Controllers 00:28:33.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:33.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:33.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:33.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:33.525 Initialization complete. Launching workers. 
00:28:33.525 ======================================================== 00:28:33.525 Latency(us) 00:28:33.525 Device Information : IOPS MiB/s Average min max 00:28:33.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7578.30 29.60 8447.58 1245.79 52435.87 00:28:33.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7438.30 29.06 8603.96 1476.58 53582.07 00:28:33.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7275.80 28.42 8795.34 1456.25 53091.98 00:28:33.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7351.30 28.72 8708.06 1481.98 53655.67 00:28:33.525 ======================================================== 00:28:33.525 Total : 29643.69 115.80 8636.77 1245.79 53655.67 00:28:33.525 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:33.525 rmmod nvme_tcp 00:28:33.525 rmmod nvme_fabrics 00:28:33.525 rmmod nvme_keyring 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419583 ']' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419583 ']' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419583' 00:28:33.525 killing process with pid 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419583 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:33.525 12:34:00 
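The pass/fail gate shown above is the point of the test: nvmf_get_stats reports four poll groups, one per target core, and with ADQ steering all port-4420 traffic into TC1's two queues, the four perf connections must collapse onto two poll groups (000 and 001 each carry two I/O qpairs) while the other two stay idle. The jq filter counts groups with current_io_qpairs == 0; the test would fail had fewer than 2 been idle. As a standalone check against a running target:

    ./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l     # 2 here: groups 002 and 003 carried no I/O qpairs

The teardown that follows removes only the firewall rules tagged with the SPDK_NVMF comment (iptables-save | grep -v SPDK_NVMF | iptables-restore), so unrelated iptables state survives the test.
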
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.525 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:36.815 00:28:36.815 real 0m52.094s 00:28:36.815 user 2m43.846s 00:28:36.815 sys 0m11.318s 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.815 ************************************ 00:28:36.815 END TEST nvmf_perf_adq 00:28:36.815 ************************************ 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:36.815 ************************************ 00:28:36.815 START TEST nvmf_shutdown 00:28:36.815 ************************************ 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:36.815 * Looking for test storage... 
00:28:36.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.815 --rc genhtml_branch_coverage=1 00:28:36.815 --rc genhtml_function_coverage=1 00:28:36.815 --rc genhtml_legend=1 00:28:36.815 --rc geninfo_all_blocks=1 00:28:36.815 --rc geninfo_unexecuted_blocks=1 00:28:36.815 00:28:36.815 ' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.815 --rc genhtml_branch_coverage=1 00:28:36.815 --rc genhtml_function_coverage=1 00:28:36.815 --rc genhtml_legend=1 00:28:36.815 --rc geninfo_all_blocks=1 00:28:36.815 --rc geninfo_unexecuted_blocks=1 00:28:36.815 00:28:36.815 ' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.815 --rc genhtml_branch_coverage=1 00:28:36.815 --rc genhtml_function_coverage=1 00:28:36.815 --rc genhtml_legend=1 00:28:36.815 --rc geninfo_all_blocks=1 00:28:36.815 --rc geninfo_unexecuted_blocks=1 00:28:36.815 00:28:36.815 ' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.815 --rc genhtml_branch_coverage=1 00:28:36.815 --rc genhtml_function_coverage=1 00:28:36.815 --rc genhtml_legend=1 00:28:36.815 --rc geninfo_all_blocks=1 00:28:36.815 --rc geninfo_unexecuted_blocks=1 00:28:36.815 00:28:36.815 ' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
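The storage-check preamble above runs the repo's dotted-version comparator: cmp_versions splits each version on dots, dashes, and colons into an array and compares field by field numerically, so "lt 1.15 2" is decided at the first field (1 < 2) and the legacy lcov option set gets exported. The core of that comparison, reduced to a runnable fragment:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ${ver1[0]} < ${ver2[0]} )) && echo "1.15 < 2"   # first differing field decides
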
00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:36.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:36.816 12:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.816 ************************************ 00:28:36.816 START TEST nvmf_shutdown_tc1 00:28:36.816 ************************************ 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.816 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.387 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.387 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.387 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.387 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.387 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.387 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.387 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.387 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.387 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.387 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:28:43.387 00:28:43.387 --- 10.0.0.2 ping statistics --- 00:28:43.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.387 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:28:43.387 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:28:43.387 00:28:43.387 --- 10.0.0.1 ping statistics --- 00:28:43.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.387 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425088 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425088 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425088 ']' 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
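The sequence traced above (nvmf/common.sh@250-291) builds the two-port test network: one physical e810 port is moved into a private network namespace to act as the target, while its sibling stays in the default namespace as the initiator. A condensed, runnable sketch of that topology, assuming the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing from this run (both vary per machine):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step right after the pings is what prefixes the nvmf_tgt launch with ip netns exec cvl_0_0_ns_spdk, which is why the target later listens on 10.0.0.2 from inside the namespace.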
00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 [2024-12-13 12:34:10.363990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:43.388 [2024-12-13 12:34:10.364041] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.388 [2024-12-13 12:34:10.444768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:43.388 [2024-12-13 12:34:10.467640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.388 [2024-12-13 12:34:10.467676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.388 [2024-12-13 12:34:10.467683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.388 [2024-12-13 12:34:10.467693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.388 [2024-12-13 12:34:10.467698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.388 [2024-12-13 12:34:10.469202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.388 [2024-12-13 12:34:10.469311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.388 [2024-12-13 12:34:10.469419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.388 [2024-12-13 12:34:10.469420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 [2024-12-13 12:34:10.601555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:43.388 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.388 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 Malloc1 
00:28:43.388 [2024-12-13 12:34:10.716779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.388 Malloc2 00:28:43.388 Malloc3 00:28:43.388 Malloc4 00:28:43.388 Malloc5 00:28:43.388 Malloc6 00:28:43.388 Malloc7 00:28:43.388 Malloc8 00:28:43.388 Malloc9 00:28:43.388 Malloc10 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425236 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425236 /var/tmp/bdevperf.sock 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425236 ']' 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.648 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:43.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
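The ten Malloc* bdevs noted above come from the shutdown.sh@28-29 loop, which appends one batch of RPCs per subsystem to rpcs.txt and replays the file through rpc_cmd (an autotest helper speaking to /var/tmp/spdk.sock) at shutdown.sh@36. A hypothetical expansion of one such batch, using the cnode and listener names visible in this log; the malloc sizes and serial-number scheme are illustrative assumptions, not taken from the script:

for i in {1..10}; do
  cat <<EOF >> rpcs.txt
bdev_malloc_create 128 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # replay the whole batch against the running nvmf_tgt

Each iteration yields one namespace-backed subsystem (nqn.2016-06.io.spdk:cnode1 through cnode10) listening on 10.0.0.2:4420, matching the ten Nvme*n1 job names in the bdevperf results further down.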
00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 [2024-12-13 12:34:11.187169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:43.649 [2024-12-13 12:34:11.187219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.649 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.649 { 00:28:43.649 "params": { 00:28:43.649 "name": "Nvme$subsystem", 00:28:43.649 "trtype": "$TEST_TRANSPORT", 00:28:43.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.649 "adrfam": "ipv4", 
00:28:43.649 "trsvcid": "$NVMF_PORT", 00:28:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.649 "hdgst": ${hdgst:-false}, 00:28:43.649 "ddgst": ${ddgst:-false} 00:28:43.649 }, 00:28:43.649 "method": "bdev_nvme_attach_controller" 00:28:43.649 } 00:28:43.649 EOF 00:28:43.649 )") 00:28:43.650 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:43.650 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:43.650 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:43.650 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme1", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme2", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme3", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme4", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme5", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme6", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme7", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 
"adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme8", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme9", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 },{ 00:28:43.650 "params": { 00:28:43.650 "name": "Nvme10", 00:28:43.650 "trtype": "tcp", 00:28:43.650 "traddr": "10.0.0.2", 00:28:43.650 "adrfam": "ipv4", 00:28:43.650 "trsvcid": "4420", 00:28:43.650 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:43.650 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:43.650 "hdgst": false, 00:28:43.650 "ddgst": false 00:28:43.650 }, 00:28:43.650 "method": "bdev_nvme_attach_controller" 00:28:43.650 }' 00:28:43.650 [2024-12-13 12:34:11.263384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.650 [2024-12-13 12:34:11.285774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425236 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:45.554 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:46.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425236 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425088 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.492 { 00:28:46.492 "params": { 00:28:46.492 "name": "Nvme$subsystem", 00:28:46.492 "trtype": "$TEST_TRANSPORT", 00:28:46.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.492 "adrfam": "ipv4", 00:28:46.492 "trsvcid": "$NVMF_PORT", 00:28:46.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.492 "hdgst": ${hdgst:-false}, 00:28:46.492 "ddgst": ${ddgst:-false} 00:28:46.492 }, 00:28:46.492 "method": "bdev_nvme_attach_controller" 00:28:46.492 } 00:28:46.492 EOF 00:28:46.492 )") 00:28:46.492 [2024-12-13 12:34:14.124835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:46.492 [2024-12-13 12:34:14.124884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425710 ] 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.492 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.493 { 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme$subsystem", 00:28:46.493 "trtype": "$TEST_TRANSPORT", 00:28:46.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "$NVMF_PORT", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.493 "hdgst": ${hdgst:-false}, 00:28:46.493 "ddgst": ${ddgst:-false} 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 } 00:28:46.493 EOF 00:28:46.493 )") 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.493 { 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme$subsystem", 00:28:46.493 "trtype": "$TEST_TRANSPORT", 00:28:46.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "$NVMF_PORT", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.493 "hdgst": ${hdgst:-false}, 00:28:46.493 "ddgst": ${ddgst:-false} 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 } 00:28:46.493 EOF 00:28:46.493 )") 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.493 { 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme$subsystem", 00:28:46.493 "trtype": "$TEST_TRANSPORT", 00:28:46.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "$NVMF_PORT", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.493 "hdgst": ${hdgst:-false}, 00:28:46.493 "ddgst": ${ddgst:-false} 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 } 00:28:46.493 EOF 00:28:46.493 )") 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
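The @560-586 trace above is gen_nvmf_target_json at work for the second launch: one bdev_nvme_attach_controller fragment is accumulated per subsystem, run through jq ., then comma-joined (IFS=,) and printed as the flat config that follows below. bdevperf never sees a config file on disk: the --json /dev/fd/62 argument shown earlier is the read end of a process substitution, as the Killed line for bdev_svc also shows. Reconstructed shape of the launch at shutdown.sh@92, with flags taken from this run:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1    # queue depth 64, 64 KiB I/Os, 1 s verify pass

The -q 64 and -o 65536 values reappear verbatim in each job line of the results table ("depth: 64, IO size: 65536"), and -t 1 matches the roughly 1.1 s per-device runtimes reported there.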
00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:46.493 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme1", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme2", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme3", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme4", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme5", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme6", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme7", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme8", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme9", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 },{ 00:28:46.493 "params": { 00:28:46.493 "name": "Nvme10", 00:28:46.493 "trtype": "tcp", 00:28:46.493 "traddr": "10.0.0.2", 00:28:46.493 "adrfam": "ipv4", 00:28:46.493 "trsvcid": "4420", 00:28:46.493 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:46.493 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:46.493 "hdgst": false, 00:28:46.493 "ddgst": false 00:28:46.493 }, 00:28:46.493 "method": "bdev_nvme_attach_controller" 00:28:46.493 }' 00:28:46.752 [2024-12-13 12:34:14.203399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.752 [2024-12-13 12:34:14.225831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.129 Running I/O for 1 seconds... 00:28:49.325 2250.00 IOPS, 140.62 MiB/s 00:28:49.325 Latency(us) 00:28:49.325 [2024-12-13T11:34:17.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.325 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme1n1 : 1.12 290.72 18.17 0.00 0.00 216439.01 18849.40 193736.90 00:28:49.325 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme2n1 : 1.07 239.53 14.97 0.00 0.00 260912.03 16727.28 225693.50 00:28:49.325 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme3n1 : 1.13 284.24 17.76 0.00 0.00 216968.44 14355.50 217704.35 00:28:49.325 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme4n1 : 1.11 287.63 17.98 0.00 0.00 209218.02 17101.78 203723.34 00:28:49.325 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme5n1 : 1.14 284.46 17.78 0.00 0.00 210670.79 4930.80 218702.99 00:28:49.325 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme6n1 : 1.13 282.02 17.63 0.00 0.00 209289.02 16477.62 214708.42 00:28:49.325 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme7n1 : 1.14 279.95 17.50 0.00 0.00 208016.73 13044.78 220700.28 00:28:49.325 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.325 Nvme8n1 : 1.13 285.98 17.87 0.00 0.00 200153.72 2465.40 214708.42 00:28:49.325 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.325 Verification LBA range: start 0x0 length 0x400 00:28:49.326 Nvme9n1 : 1.15 279.01 17.44 0.00 0.00 202638.48 19348.72 227690.79 00:28:49.326 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:49.326 Verification LBA range: start 0x0 length 0x400 00:28:49.326 Nvme10n1 : 1.15 278.46 17.40 0.00 0.00 200011.29 15291.73 242670.45 00:28:49.326 [2024-12-13T11:34:17.026Z] =================================================================================================================== 00:28:49.326 [2024-12-13T11:34:17.026Z] Total : 2792.00 174.50 0.00 0.00 212452.44 2465.40 242670.45 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.585 rmmod nvme_tcp 00:28:49.585 rmmod nvme_fabrics 00:28:49.585 rmmod nvme_keyring 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425088 ']' 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425088 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425088 ']' 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 425088 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425088 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425088' 00:28:49.585 killing process with pid 425088 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425088 00:28:49.585 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425088 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.153 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.060 00:28:52.060 real 0m15.273s 00:28:52.060 user 0m34.355s 00:28:52.060 sys 0m5.786s 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.060 ************************************ 00:28:52.060 END TEST nvmf_shutdown_tc1 00:28:52.060 ************************************ 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:52.060 ************************************ 00:28:52.060 START TEST nvmf_shutdown_tc2 00:28:52.060 ************************************ 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:52.060 12:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:28:52.060 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:52.061 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:52.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:52.061 Found net devices under 0000:af:00.0: cvl_0_0 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.061 12:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:52.061 Found net devices under 0000:af:00.1: cvl_0_1 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.061 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:28:52.321 00:28:52.321 --- 10.0.0.2 ping statistics --- 00:28:52.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.321 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:28:52.321 00:28:52.321 --- 10.0.0.1 ping statistics --- 00:28:52.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.321 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.321 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.321 12:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=426723 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 426723 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426723 ']' 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.321 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.580 [2024-12-13 12:34:20.066649] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:52.580 [2024-12-13 12:34:20.066697] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.580 [2024-12-13 12:34:20.127349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.580 [2024-12-13 12:34:20.150088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.580 [2024-12-13 12:34:20.150126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.580 [2024-12-13 12:34:20.150133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.580 [2024-12-13 12:34:20.150139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.580 [2024-12-13 12:34:20.150144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
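The nvmftestinit trace above boils down to a small amount of plumbing: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420 between them. A minimal sketch of the equivalent commands, assuming the device names and 10.0.0.0/24 addressing shown in the trace; the SPDK helper scripts wrap these in retries and cleanup traps.

# target side: its own namespace; initiator side: root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open NVMe/TCP port 4420; the SPDK_NVMF comment lets teardown strip only
# these rules later via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# verify reachability in both directions, then launch the target in the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The namespace is also why the cleanup paths seen elsewhere in this log flush cvl_0_1 and call remove_spdk_ns: deleting cvl_0_0_ns_spdk returns the target port to the root namespace for the next test.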
00:28:52.580 [2024-12-13 12:34:20.151592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.580 [2024-12-13 12:34:20.151635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.580 [2024-12-13 12:34:20.151657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.580 [2024-12-13 12:34:20.151657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.580 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.580 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:52.580 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.580 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.580 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.839 [2024-12-13 12:34:20.294978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.839 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.840 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.840 Malloc1 00:28:52.840 [2024-12-13 12:34:20.408873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.840 Malloc2 00:28:52.840 Malloc3 00:28:52.840 Malloc4 00:28:53.099 Malloc5 00:28:53.099 Malloc6 00:28:53.099 Malloc7 00:28:53.099 Malloc8 00:28:53.099 Malloc9 00:28:53.099 Malloc10 00:28:53.099 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.099 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:53.099 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.099 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=426982 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 426982 /var/tmp/bdevperf.sock 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426982 ']' 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.359 12:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:53.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 "name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.359 "adrfam": "ipv4", 00:28:53.359 "trsvcid": "$NVMF_PORT", 00:28:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.359 "hdgst": ${hdgst:-false}, 00:28:53.359 "ddgst": ${ddgst:-false} 00:28:53.359 }, 00:28:53.359 "method": "bdev_nvme_attach_controller" 00:28:53.359 } 00:28:53.359 EOF 00:28:53.359 )") 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 "name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.359 "adrfam": "ipv4", 00:28:53.359 "trsvcid": "$NVMF_PORT", 00:28:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.359 "hdgst": ${hdgst:-false}, 00:28:53.359 "ddgst": ${ddgst:-false} 00:28:53.359 }, 00:28:53.359 "method": "bdev_nvme_attach_controller" 00:28:53.359 } 00:28:53.359 EOF 00:28:53.359 )") 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 
"name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.359 "adrfam": "ipv4", 00:28:53.359 "trsvcid": "$NVMF_PORT", 00:28:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.359 "hdgst": ${hdgst:-false}, 00:28:53.359 "ddgst": ${ddgst:-false} 00:28:53.359 }, 00:28:53.359 "method": "bdev_nvme_attach_controller" 00:28:53.359 } 00:28:53.359 EOF 00:28:53.359 )") 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 "name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.359 "adrfam": "ipv4", 00:28:53.359 "trsvcid": "$NVMF_PORT", 00:28:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.359 "hdgst": ${hdgst:-false}, 00:28:53.359 "ddgst": ${ddgst:-false} 00:28:53.359 }, 00:28:53.359 "method": "bdev_nvme_attach_controller" 00:28:53.359 } 00:28:53.359 EOF 00:28:53.359 )") 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 "name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.359 "adrfam": "ipv4", 00:28:53.359 "trsvcid": "$NVMF_PORT", 00:28:53.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.359 "hdgst": ${hdgst:-false}, 00:28:53.359 "ddgst": ${ddgst:-false} 00:28:53.359 }, 00:28:53.359 "method": "bdev_nvme_attach_controller" 00:28:53.359 } 00:28:53.359 EOF 00:28:53.359 )") 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.359 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.359 { 00:28:53.359 "params": { 00:28:53.359 "name": "Nvme$subsystem", 00:28:53.359 "trtype": "$TEST_TRANSPORT", 00:28:53.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "$NVMF_PORT", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.360 "hdgst": ${hdgst:-false}, 00:28:53.360 "ddgst": ${ddgst:-false} 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 } 00:28:53.360 EOF 00:28:53.360 )") 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.360 { 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme$subsystem", 00:28:53.360 "trtype": "$TEST_TRANSPORT", 00:28:53.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "$NVMF_PORT", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.360 "hdgst": ${hdgst:-false}, 00:28:53.360 "ddgst": ${ddgst:-false} 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 } 00:28:53.360 EOF 00:28:53.360 )") 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.360 [2024-12-13 12:34:20.878199] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:53.360 [2024-12-13 12:34:20.878248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426982 ] 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.360 { 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme$subsystem", 00:28:53.360 "trtype": "$TEST_TRANSPORT", 00:28:53.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "$NVMF_PORT", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.360 "hdgst": ${hdgst:-false}, 00:28:53.360 "ddgst": ${ddgst:-false} 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 } 00:28:53.360 EOF 00:28:53.360 )") 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.360 { 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme$subsystem", 00:28:53.360 "trtype": "$TEST_TRANSPORT", 00:28:53.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "$NVMF_PORT", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.360 "hdgst": ${hdgst:-false}, 00:28:53.360 "ddgst": ${ddgst:-false} 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 } 00:28:53.360 EOF 00:28:53.360 )") 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.360 { 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme$subsystem", 00:28:53.360 "trtype": "$TEST_TRANSPORT", 00:28:53.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.360 "adrfam": 
"ipv4", 00:28:53.360 "trsvcid": "$NVMF_PORT", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.360 "hdgst": ${hdgst:-false}, 00:28:53.360 "ddgst": ${ddgst:-false} 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 } 00:28:53.360 EOF 00:28:53.360 )") 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:53.360 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme1", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme2", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme3", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme4", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme5", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme6", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme7", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 
"adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme8", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme9", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 },{ 00:28:53.360 "params": { 00:28:53.360 "name": "Nvme10", 00:28:53.360 "trtype": "tcp", 00:28:53.360 "traddr": "10.0.0.2", 00:28:53.360 "adrfam": "ipv4", 00:28:53.360 "trsvcid": "4420", 00:28:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:53.360 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:53.360 "hdgst": false, 00:28:53.360 "ddgst": false 00:28:53.360 }, 00:28:53.360 "method": "bdev_nvme_attach_controller" 00:28:53.360 }' 00:28:53.360 [2024-12-13 12:34:20.952184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.360 [2024-12-13 12:34:20.974697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.266 Running I/O for 10 seconds... 
00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:55.266 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.525 12:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 426982 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426982 ']' 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426982 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426982 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426982' 00:28:55.525 killing process with pid 426982 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426982 00:28:55.525 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426982 00:28:55.784 Received shutdown signal, test time was about 0.664026 seconds 00:28:55.784 00:28:55.784 Latency(us) 00:28:55.784 [2024-12-13T11:34:23.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme1n1 : 0.65 303.78 18.99 0.00 0.00 205932.15 3651.29 207717.91 00:28:55.784 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme2n1 : 0.66 289.43 18.09 0.00 0.00 212619.46 16852.11 220700.28 00:28:55.784 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme3n1 : 0.64 300.50 18.78 0.00 0.00 198615.69 14979.66 200727.41 00:28:55.784 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme4n1 : 0.65 305.27 19.08 0.00 0.00 190127.15 3183.18 207717.91 
00:28:55.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme5n1 : 0.66 290.56 18.16 0.00 0.00 196394.75 21720.50 204721.98 00:28:55.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme6n1 : 0.65 295.32 18.46 0.00 0.00 187558.12 20846.69 187745.04 00:28:55.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme7n1 : 0.65 293.94 18.37 0.00 0.00 183417.58 16103.13 209715.20 00:28:55.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme8n1 : 0.66 298.33 18.65 0.00 0.00 175715.06 1092.27 206719.27 00:28:55.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme9n1 : 0.63 201.60 12.60 0.00 0.00 249001.20 27587.54 219701.64 00:28:55.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:55.784 Verification LBA range: start 0x0 length 0x400 00:28:55.784 Nvme10n1 : 0.63 202.48 12.65 0.00 0.00 242064.82 17850.76 232684.01 00:28:55.784 [2024-12-13T11:34:23.484Z] =================================================================================================================== 00:28:55.784 [2024-12-13T11:34:23.484Z] Total : 2781.22 173.83 0.00 0.00 201114.37 1092.27 232684.01 00:28:55.784 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 426723 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.162 rmmod nvme_tcp 00:28:57.162 rmmod nvme_fabrics 00:28:57.162 rmmod nvme_keyring 00:28:57.162 12:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:57.162 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 426723 ']' 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 426723 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426723 ']' 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426723 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426723 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426723' 00:28:57.163 killing process with pid 426723 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426723 00:28:57.163 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426723 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.422 12:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.422 12:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.327 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.327 00:28:59.327 real 0m7.276s 00:28:59.327 user 0m21.569s 00:28:59.327 sys 0m1.254s 00:28:59.327 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.327 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.327 ************************************ 00:28:59.327 END TEST nvmf_shutdown_tc2 00:28:59.327 ************************************ 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 ************************************ 00:28:59.587 START TEST nvmf_shutdown_tc3 00:28:59.587 ************************************ 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.587 12:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.587 12:34:27 
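The e810/x722/mlx tables above collect PCI addresses out of a pci_bus_cache map keyed by vendor:device ID; a minimal illustration of that classification, assuming a pre-populated cache (the sample entry below uses the addresses this rig reports):

  # Classify NICs by vendor:device ID, assuming pci_bus_cache maps
  # "vendor:device" strings to space-separated PCI addresses.
  declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1" )
  intel=0x8086 mellanox=0x15b3
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  x722=(${pci_bus_cache["$intel:0x37d2"]})
  mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})
  echo "E810 ports: ${e810[*]}"   # -> 0000:af:00.0 0000:af:00.1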
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.587 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.587 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.587 12:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:59.587 Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.587 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.587 12:34:27 
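The pci_net_devs lookup above resolves each PCI function to its kernel net device through sysfs; the same resolution can be done standalone:

  # Map a PCI function to its net device name(s) via sysfs, then strip
  # the path prefix exactly as the trace above does.
  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0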
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.587 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:28:59.847 00:28:59.847 --- 10.0.0.2 ping statistics --- 00:28:59.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.847 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:28:59.847 00:28:59.847 --- 10.0.0.1 ping statistics --- 00:28:59.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.847 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428146 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428146 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428146 ']' 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
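The namespace plumbing traced above is reproducible on its own; this sketch assumes the two ports cvl_0_0/cvl_0_1 are wired back-to-back as on this rig (run as root):

  # Target side lives in a fresh network namespace; initiator stays in the
  # root namespace. 10.0.0.2 is the target address, 10.0.0.1 the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so the iptr cleanup can find
  # it later, then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1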
00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.847 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.847 [2024-12-13 12:34:27.460197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:59.847 [2024-12-13 12:34:27.460239] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.847 [2024-12-13 12:34:27.538317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.107 [2024-12-13 12:34:27.560110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.107 [2024-12-13 12:34:27.560151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.107 [2024-12-13 12:34:27.560159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.107 [2024-12-13 12:34:27.560165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.107 [2024-12-13 12:34:27.560170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.107 [2024-12-13 12:34:27.561621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.107 [2024-12-13 12:34:27.561731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.107 [2024-12-13 12:34:27.561837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.107 [2024-12-13 12:34:27.561838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.107 [2024-12-13 12:34:27.701481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:00.107 12:34:27 
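The rpc_cmd call above has a direct rpc.py equivalent; a sketch of creating the transport by hand against the default RPC socket (-u sets the I/O unit size to 8192 bytes; -o is the TCP-only option carried in NVMF_TRANSPORT_OPTS, check rpc.py -h on your build to confirm its meaning):

  # Create the TCP transport on the running nvmf_tgt. The RPC socket is a
  # UNIX domain socket, so no netns wrapping is needed for the client.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192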
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.107 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:00.107
[... the shutdown.sh@28/@29 for/cat pair repeats identically for each of the 10 subsystems; iterations 2-10 elided ...]
12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:00.108 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.108 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.108 Malloc1
00:29:00.367 [2024-12-13 12:34:27.816171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.367 Malloc2 00:29:00.367 Malloc3 00:29:00.367 Malloc4 00:29:00.367 Malloc5 00:29:00.367 Malloc6 00:29:00.367 Malloc7 00:29:00.626 Malloc8 00:29:00.626 Malloc9 00:29:00.626 Malloc10 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428275 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428275 /var/tmp/bdevperf.sock 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428275 ']' 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
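Each elided @29 cat in the create_subsystems loop above appends one subsystem's worth of RPCs to rpcs.txt; the exact lines are not expanded in this trace, but given the Malloc1-Malloc10 bdevs created and the cnode1-cnode10 subsystem NQNs dialed later, each iteration plausibly emits something like this hypothetical batch (sizes and serial numbers assumed):

  # Hypothetical rpcs.txt fragment for subsystem $i: a RAM-backed bdev,
  # a subsystem, its namespace, and a TCP listener on the target address.
  printf '%s\n' \
    "bdev_malloc_create -b Malloc$i 64 512" \
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420" \
    >> rpcs.txt

The shutdown.sh@36 rpc_cmd above then appears to replay the whole batch in one RPC session, which is when the Malloc1-Malloc10 creation notices print.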
00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.626 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.626 { 00:29:00.626 "params": { 00:29:00.626 "name": "Nvme$subsystem", 00:29:00.626 "trtype": "$TEST_TRANSPORT", 00:29:00.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.626 "adrfam": "ipv4", 00:29:00.626 "trsvcid": "$NVMF_PORT", 00:29:00.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.626 "hdgst": ${hdgst:-false}, 00:29:00.626 "ddgst": ${ddgst:-false} 00:29:00.626 }, 00:29:00.626 "method": "bdev_nvme_attach_controller" 00:29:00.626 } 00:29:00.626 EOF 00:29:00.626 )") 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:00.626
[... the nvmf/common.sh@562/@582 for/config+=/cat sequence repeats verbatim for all 10 subsystems (Nvme1-Nvme10); iterations 2-10 elided. Mid-loop the bdevperf app began initializing: ...]
[2024-12-13 12:34:28.286631] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:00.627 [2024-12-13 12:34:28.286680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428275 ] 00:29:00.627 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:29:00.627 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:00.627 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme1", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme2", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme3", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme4", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme5", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme6", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme7", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme8", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.627 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:00.627 "hdgst": false, 00:29:00.627 "ddgst": false 00:29:00.627 }, 00:29:00.627 "method": "bdev_nvme_attach_controller" 00:29:00.627 },{ 00:29:00.627 "params": { 00:29:00.627 "name": "Nvme9", 00:29:00.627 "trtype": "tcp", 00:29:00.627 "traddr": "10.0.0.2", 00:29:00.627 "adrfam": "ipv4", 00:29:00.627 "trsvcid": "4420", 00:29:00.627 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.628 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.628 "hdgst": false, 00:29:00.628 "ddgst": false 00:29:00.628 }, 00:29:00.628 "method": "bdev_nvme_attach_controller" 00:29:00.628 },{ 00:29:00.628 "params": { 00:29:00.628 "name": "Nvme10", 00:29:00.628 "trtype": "tcp", 00:29:00.628 "traddr": "10.0.0.2", 00:29:00.628 "adrfam": "ipv4", 00:29:00.628 "trsvcid": "4420", 00:29:00.628 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.628 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.628 "hdgst": false, 00:29:00.628 "ddgst": false 00:29:00.628 }, 00:29:00.628 "method": "bdev_nvme_attach_controller" 00:29:00.628 }' 00:29:00.886 [2024-12-13 12:34:28.359252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.886 [2024-12-13 12:34:28.382049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.264 Running I/O for 10 seconds... 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.523 12:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.523 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.782 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.782 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:02.782 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:02.782 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428146 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428146 ']' 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428146 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428146 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 428146' 00:29:03.057 killing process with pid 428146 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428146 00:29:03.057 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428146 00:29:03.057 [2024-12-13 12:34:30.604903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.604972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.604980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.604988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.604996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.057 [2024-12-13 12:34:30.605166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605269] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 00:29:03.058 [2024-12-13 12:34:30.605403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc44b40 is same with the state(6) to be set 
00:29:03.058 [2024-12-13 12:34:30.607968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45030 is same with the state(6) to be set
[... same recv-state error repeated for tqpair=0xc45030 through 12:34:30.608433; duplicate lines omitted ...]
00:29:03.059 [2024-12-13 12:34:30.609273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.059 [2024-12-13 12:34:30.609307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous command/completion pairs follow through 12:34:30.610336 for WRITE cid:42-63 (lba:21760-24448) and READ cid:0-40 (lba:16384-21504), len:128 and lba step 128 throughout, every command completed with ABORTED - SQ DELETION (00/08); three recv-state errors for tqpair=0xc45500 arrive interleaved with these completions, the first of them torn mid-word where a completion line was printed concurrently into it ...]
00:29:03.060 [2024-12-13 12:34:30.610908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc459f0 is same with the state(6) to be set
[... same error for tqpair=0xc459f0, 4 occurrences in total ...]
00:29:03.060 [2024-12-13 12:34:30.611411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45ec0 is same with the state(6) to be set
[... same error repeated for tqpair=0xc45ec0 through 12:34:30.611826; duplicate lines omitted ...]
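The ABORTED - SQ DELETION completions summarized above are how queued I/O gets failed back when its submission queue is deleted during disconnect: the (00/08) pair is the completion's Status Code Type and Status Code, SCT 0x0 (generic command status) and SC 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. A small stand-alone decoder for the 16-bit status field is sketched below; the helper names are ours, not SPDK's, and the printed strings merely follow the style of the log lines above.

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the SCT/SC pair printed as "(00/08)" above. Per the NVMe
     * base spec, the 16-bit status half of completion DW3 carries the
     * phase tag in bit 0, the Status Code in bits 8:1, and the Status
     * Code Type in bits 11:9. */
    static const char *decode_generic_sc(uint8_t sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "OTHER";
        }
    }

    int main(void)
    {
        uint16_t status = (0x0 << 9) | (0x08 << 1); /* SCT=0x0, SC=0x08 */
        uint8_t sct = (status >> 9) & 0x7;
        uint8_t sc  = (status >> 1) & 0xff;

        if (sct == 0x0) { /* generic command status */
            printf("%s (%02x/%02x)\n", decode_generic_sc(sc), sct, sc);
        }
        return 0;
    }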
00:29:03.061 [2024-12-13 12:34:30.612757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46240 is same with the state(6) to be set
[... same error repeated for tqpair=0xc46240 through 12:34:30.613189; duplicate lines omitted ...]
00:29:03.062 [2024-12-13 12:34:30.614226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46710 is same with the state(6) to be set
[... same error repeated for tqpair=0xc46710 through 12:34:30.614652; duplicate lines omitted ...]
00:29:03.063 [2024-12-13 12:34:30.615473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set
[... same error repeated for tqpair=0xc46c00 through 12:34:30.615577; duplicate lines omitted ...]
00:29:03.063 [2024-12-13 12:34:30.615583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615738] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 
00:29:03.063 [2024-12-13 12:34:30.615894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.615913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46c00 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.063 [2024-12-13 12:34:30.616535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is 
same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc470d0 is same with the state(6) to be set 00:29:03.064 [2024-12-13 12:34:30.616903] 
00:29:03.064 [2024-12-13 12:34:30.622473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.064 [2024-12-13 12:34:30.622503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous command/completion pairs repeated for WRITE cid:51-63 (lba 31104-32640), READ cid:4-9 (lba 25088-25728), WRITE cid:0-3 (lba 32768-33152), and READ cid:10-49 (lba 25856-30848), through 12:34:30.623543 ...]
00:29:03.066 [2024-12-13 12:34:30.623571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.066 [2024-12-13 12:34:30.624774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:03.066 [2024-12-13 12:34:30.624841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390920 (9): Bad file descriptor
00:29:03.066 [2024-12-13 12:34:30.624871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:03.066 [2024-12-13 12:34:30.624882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST + ABORTED - SQ DELETION pairs repeated for cid:1-3 ...]
00:29:03.066 [2024-12-13 12:34:30.624935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60610 is same with the state(6) to be set
[... analogous blocks of four ASYNC EVENT REQUEST + ABORTED - SQ DELETION pairs, each closed by the same nvme_tcp.c: 326 recv-state error, repeated for tqpair=0x23646d0, 0x23a1510, 0x2368c00, 0x1f30c40, 0x1f25270, 0x1f3b0b0, 0x236f970, and 0x1f33420, through 12:34:30.625629 ...]
00:29:03.067 [2024-12-13 12:34:30.625687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.067 [2024-12-13 12:34:30.625696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous command/completion pairs repeated for READ cid:63 (lba 24448) and WRITE cid:0-11 (lba 24576-25984) ...]
00:29:03.067 [2024-12-13 12:34:30.625920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.625928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.625937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.625944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.625953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.625961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.625970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.625977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.625989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.625996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.067 [2024-12-13 12:34:30.626100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.067 [2024-12-13 12:34:30.626107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:03.068 [2024-12-13 12:34:30.626884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.068 [2024-12-13 12:34:30.626903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.068 [2024-12-13 12:34:30.626911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.626919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.626928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.626935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.626943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.626950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.626958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.626968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.626977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.626985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.626993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 
[2024-12-13 12:34:30.627050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 
12:34:30.627210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.069 [2024-12-13 12:34:30.627558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.069 [2024-12-13 12:34:30.627564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.627887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.070 [2024-12-13 12:34:30.627894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.070 [2024-12-13 12:34:30.629111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:03.070 [2024-12-13 12:34:30.629147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f25270 (9): Bad file descriptor 00:29:03.070 [2024-12-13 12:34:30.631405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:03.070 [2024-12-13 12:34:30.631435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:03.070 [2024-12-13 12:34:30.631449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f30c40 (9): Bad file descriptor 00:29:03.070 [2024-12-13 12:34:30.631460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33420 (9): Bad file descriptor 00:29:03.070 [2024-12-13 12:34:30.631635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.070 [2024-12-13 12:34:30.631651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2390920 with addr=10.0.0.2, port=4420 00:29:03.070 [2024-12-13 12:34:30.631660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390920 is same with the state(6) to be set 00:29:03.070 [2024-12-13 12:34:30.632343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.070 [2024-12-13 12:34:30.632367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f25270 with addr=10.0.0.2, port=4420 00:29:03.070 [2024-12-13 12:34:30.632377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f25270 is same with the state(6) to be set 00:29:03.070 [2024-12-13 12:34:30.632402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390920 (9): Bad file descriptor 00:29:03.070 [2024-12-13 12:34:30.632665] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.070 [2024-12-13 12:34:30.632943] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.070 [2024-12-13 12:34:30.632991] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.070 [2024-12-13 12:34:30.633034] 
00:29:03.070 [2024-12-13 12:34:30.633090] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:03.070 [2024-12-13 12:34:30.633133] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:03.070 [2024-12-13 12:34:30.633252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.070 [2024-12-13 12:34:30.633267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33420 with addr=10.0.0.2, port=4420
00:29:03.070 [2024-12-13 12:34:30.633277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f33420 is same with the state(6) to be set
00:29:03.070 [2024-12-13 12:34:30.633422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.070 [2024-12-13 12:34:30.633434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f30c40 with addr=10.0.0.2, port=4420
00:29:03.070 [2024-12-13 12:34:30.633443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30c40 is same with the state(6) to be set
00:29:03.070 [2024-12-13 12:34:30.633453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f25270 (9): Bad file descriptor
00:29:03.070 [2024-12-13 12:34:30.633463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:03.070 [2024-12-13 12:34:30.633470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:03.070 [2024-12-13 12:34:30.633480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:03.070 [2024-12-13 12:34:30.633491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:03.070 [2024-12-13 12:34:30.633588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33420 (9): Bad file descriptor
00:29:03.070 [2024-12-13 12:34:30.633600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f30c40 (9): Bad file descriptor
00:29:03.070 [2024-12-13 12:34:30.633607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:03.070 [2024-12-13 12:34:30.633615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.633622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.633629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.633675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.633684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.633690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.633697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.633704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.633710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.633717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.633724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.634799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e60610 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.634819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23646d0 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.634836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a1510 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.634856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2368c00 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.634874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b0b0 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.634889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236f970 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.639243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:03.071 [2024-12-13 12:34:30.639452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.071 [2024-12-13 12:34:30.639467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2390920 with addr=10.0.0.2, port=4420
00:29:03.071 [2024-12-13 12:34:30.639476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390920 is same with the state(6) to be set
00:29:03.071 [2024-12-13 12:34:30.639508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390920 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.639538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.639546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.639555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.639562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.641767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:03.071 [2024-12-13 12:34:30.642031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.071 [2024-12-13 12:34:30.642046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f25270 with addr=10.0.0.2, port=4420
00:29:03.071 [2024-12-13 12:34:30.642054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f25270 is same with the state(6) to be set
00:29:03.071 [2024-12-13 12:34:30.642085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f25270 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.642116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.642125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.642134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.642140] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.642492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:03.071 [2024-12-13 12:34:30.642506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:03.071 [2024-12-13 12:34:30.642727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.071 [2024-12-13 12:34:30.642742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f30c40 with addr=10.0.0.2, port=4420
00:29:03.071 [2024-12-13 12:34:30.642751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30c40 is same with the state(6) to be set
00:29:03.071 [2024-12-13 12:34:30.642893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.071 [2024-12-13 12:34:30.642907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f33420 with addr=10.0.0.2, port=4420
00:29:03.071 [2024-12-13 12:34:30.642916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f33420 is same with the state(6) to be set
00:29:03.071 [2024-12-13 12:34:30.642951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f30c40 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.642963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f33420 (9): Bad file descriptor
00:29:03.071 [2024-12-13 12:34:30.642992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.643001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.643010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.643016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
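[editor's note] Two recurring codes in the failures above are worth decoding. errno = 111 from posix_sock_create is ECONNREFUSED on Linux: while the target subsystem is being torn down, nothing accepts connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port), so every reconnect attempt is refused and the controllers end up in the failed state. The "(00/08)" printed with each aborted completion is the NVMe SCT/SC pair: status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion). A minimal standalone sketch (illustrative only, not SPDK source) of how that pair unpacks from the 16-bit completion status word:

#include <stdio.h>

/* The NVMe completion status word packs the phase tag in bit 0, the
 * status code (SC) in bits 8:1, and the status code type (SCT) in
 * bits 11:9. "(00/08)" in the log is therefore SCT 0x0 / SC 0x08,
 * which SPDK prints as "ABORTED - SQ DELETION". */
int main(void)
{
    unsigned short status = (0x0u << 9) | (0x08u << 1); /* SCT=0, SC=8 */

    unsigned sct = (status >> 9) & 0x7u;  /* status code type */
    unsigned sc  = (status >> 1) & 0xffu; /* status code      */

    printf("(%02x/%02x)%s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " = ABORTED - SQ DELETION" : "");
    return 0;
}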
00:29:03.071 [2024-12-13 12:34:30.643024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:03.071 [2024-12-13 12:34:30.643031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:03.071 [2024-12-13 12:34:30.643038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:03.071 [2024-12-13 12:34:30.643044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:03.071 [2024-12-13 12:34:30.644935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.071 [2024-12-13 12:34:30.644956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) qid:1 cid:0 completion pairs repeated for cid:1 through cid:62, lba 16512 through 24320 in steps of 128, len:128 ...]
00:29:03.073 [2024-12-13 12:34:30.646020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.073 [2024-12-13 12:34:30.646027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.073 [2024-12-13 12:34:30.646036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36e10 is same with the state(6) to be set
00:29:03.073 [2024-12-13 12:34:30.647037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.073 [2024-12-13 12:34:30.647055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) qid:1 cid:0 completion pairs repeated for cid:1 through cid:62, lba 16512 through 24320 in steps of 128, len:128 ...]
00:29:03.074 [2024-12-13 12:34:30.648083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.074 [2024-12-13 12:34:30.648090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.074 [2024-12-13 12:34:30.648097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ae730 is same with the state(6) to be set
00:29:03.074 [2024-12-13 12:34:30.649080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.074 [2024-12-13 12:34:30.649095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) qid:1 cid:0 completion pairs repeated for cid:1 through cid:58, lba 16512 through 23808 in steps of 128, len:128 ...]
00:29:03.076 [2024-12-13 12:34:30.650058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.076 [2024-12-13 12:34:30.650065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.076 [2024-12-13
12:34:30.650075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.650082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.650091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.650098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.650107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.650114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.650123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.650131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.650139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233f020 is same with the state(6) to be set 00:29:03.076 [2024-12-13 12:34:30.651121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.076 [2024-12-13 12:34:30.651491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.076 [2024-12-13 12:34:30.651497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.651992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.651999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.077 [2024-12-13 12:34:30.652144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.077 [2024-12-13 12:34:30.652153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.652161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.652169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25af770 is same with the state(6) to be set 00:29:03.078 [2024-12-13 12:34:30.653142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-12-13 12:34:30.653800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.078 [2024-12-13 12:34:30.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:03.079 [2024-12-13 12:34:30.653852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.653988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.653995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 
12:34:30.654014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.079 [2024-12-13 12:34:30.654166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-12-13 12:34:30.654174] 
00:29:03.079 [2024-12-13 12:34:30.654182-654189] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: final READ of the first qpair's dump (sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.079 [2024-12-13 12:34:30.654197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b0aa0 is same with the state(6) to be set
00:29:03.079-081 [2024-12-13 12:34:30.655178-656212] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second dump of 64 outstanding READs (sqid:1 cid:0-63 nsid:1, lba:16384-24448 rising in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [126 near-identical command/completion lines elided]
00:29:03.081 [2024-12-13 12:34:30.656219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b1d70 is same with the state(6) to be set
00:29:03.081 [2024-12-13 12:34:30.657178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:03.081 [2024-12-13 12:34:30.657198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:03.081 [2024-12-13 12:34:30.657209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:03.081 [2024-12-13 12:34:30.657220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:03.081 [2024-12-13 12:34:30.657303] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:03.081 [2024-12-13 12:34:30.657316] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
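The abort pattern above is worth decoding: status (00/08) is NVMe status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, i.e. the submission queue was torn down while the READs were still in flight. The LBAs are strictly regular, so a quick shell check (nothing SPDK-specific; the constants come straight from the dump) confirms the 64 aborted READs cover one contiguous region:

    # lba = 16384 + 128*cid for cid 0..63, matching the entries in the dump
    for cid in 0 31 63; do
        echo "cid:$cid lba:$((16384 + 128 * cid))"
    done
    # prints cid:0 lba:16384, cid:31 lba:20352, cid:63 lba:24448 - as logged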
00:29:03.081 [2024-12-13 12:34:30.657381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:03.081 task offset: 21632 on job bdev=Nvme10n1 fails
00:29:03.081
00:29:03.081 Latency(us)
00:29:03.081 [2024-12-13T11:34:30.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.081 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in about 0.73-0.76 seconds with error)
00:29:03.081
00:29:03.081 Job        runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
00:29:03.081 Nvme1n1    0.73        260.04   16.25   87.59   0.00  181816.02  7833.11   199728.76
00:29:03.081 Nvme2n1    0.75        171.23   10.70   85.62   0.00  241084.71  15042.07  227690.79
00:29:03.081 Nvme3n1    0.73        262.45   16.40   87.48   0.00  172951.41  27962.03  182751.82
00:29:03.081 Nvme4n1    0.73        268.69   16.79   87.73   0.00  165949.33  5742.20   213709.78
00:29:03.081 Nvme5n1    0.75        170.76   10.67   85.38   0.00  226361.86  18225.25  228689.43
00:29:03.081 Nvme6n1    0.75        170.30   10.64   85.15   0.00  221900.15  16602.45  217704.35
00:29:03.081 Nvme7n1    0.75        175.15   10.95   84.92   0.00  212932.61  15354.15  209715.20
00:29:03.081 Nvme8n1    0.76        176.01   11.00   83.37   0.00  208287.13  15791.06  207717.91
00:29:03.081 Nvme9n1    0.76        168.94   10.56   84.47   0.00  208223.09  16602.45  214708.42
00:29:03.081 Nvme10n1   0.73        176.45   11.03   88.23   0.00  192142.06  16976.94  231685.36
00:29:03.081 [2024-12-13T11:34:30.781Z] ===============================================================================
00:29:03.081 [2024-12-13T11:34:30.781Z] Total      -           2000.02  125.00  859.95  0.00  200465.32  5742.20   231685.36
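The table's columns are internally consistent: with an IO size of 65536 bytes, MiB/s is just IOPS divided by 16. A one-liner to verify, using the Nvme1n1 row's figures (awk is only a calculator here, not part of the test framework):

    # 260.04 IOPS * 65536 B / 2^20 B/MiB = 16.25 MiB/s, matching the row above
    awk 'BEGIN { printf "%.2f MiB/s\n", 260.04 * 65536 / (1024 * 1024) }'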
00:29:03.081 [2024-12-13 12:34:30.692048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:03.081 [2024-12-13 12:34:30.692097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:03.081 [2024-12-13 12:34:30.694239-694278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: resetting controller, likewise for cnode10, cnode4, cnode1 and cnode3
00:29:03.081-082 [2024-12-13 12:34:30.692358-695840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420 and nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(6) to be set, repeated in turn for tqpair = 0x1f3b0b0, 0x236f970, 0x1e60610, 0x2368c00, 0x23646d0, 0x23a1510, 0x2390920, 0x1f25270, 0x1f33420 and 0x1f30c40
00:29:03.081-082 [2024-12-13 12:34:30.694770-696087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for each of the same ten tqpairs
00:29:03.081 [2024-12-13 12:34:30.694843-694878] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress: [cnode7, 1], [cnode6, 1], [cnode5, 1], [cnode2, 1]
00:29:03.082 [2024-12-13 12:34:30.695870-696279] The same four-line teardown sequence then repeats for each of cnode2, cnode5, cnode6, cnode7, cnode8, cnode9, cnode10, cnode4, cnode1 and cnode3 (all [nqn.2016-06.io.spdk:cnodeN, 1]):
00:29:03.082 nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state
00:29:03.082 nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed
00:29:03.082 nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: in failed state.
00:29:03.082 bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
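errno 111 is ECONNREFUSED: by this point the target side has been shut down, so every reconnect to 10.0.0.2:4420 is refused and each controller ends in the failed state above. A hypothetical manual probe of the same listener (not part of the test; assumes nvme-cli is available on the initiator side) would fail the same way while the target is down:

    # address and port taken from the trace; expected to fail during shutdown
    nvme discover -t tcp -a 10.0.0.2 -s 4420 ||
        echo "discover refused (errno 111 = ECONNREFUSED), matching the log"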
00:29:03.341 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428275
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428275
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428275
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:04.721 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:04.721 rmmod nvme_tcp
00:29:04.721 rmmod nvme_fabrics
00:29:04.721 rmmod nvme_keyring
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428146 ']'
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428146
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428146 ']'
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428146
00:29:04.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428146) - No such process
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428146 is not found'
00:29:04.721 Process with pid 428146 is not found
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:04.721 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:06.629
00:29:06.629 real 0m7.074s
00:29:06.629 user 0m16.163s
00:29:06.629 sys 0m1.277s
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:06.629 ************************************
00:29:06.629 END TEST nvmf_shutdown_tc3
00:29:06.629 ************************************
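The es=255 -> es=127 -> es=1 dance in the trace above is the autotest NOT helper inverting the exit status of a command that is expected to fail (bdevperf was killed, so wait reports a non-zero status). A minimal sketch of that logic, reconstructed from the trace rather than copied from autotest_common.sh:

    # NOT <cmd>: succeed only if <cmd> fails; normalize shell statuses first
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=127   # >128 usually means killed by a signal
        case "$es" in
            127) es=1 ;;           # fold "not found"/signal statuses to plain failure
        esac
        (( es != 0 ))              # invert: failure of "$@" is success for NOT
    }
    NOT false && echo "expected failure confirmed"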
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:06.629 ************************************
00:29:06.629 START TEST nvmf_shutdown_tc4
00:29:06.629 ************************************
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
[nvmf/common.sh@309-@361: pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx array setup and e810/tcp device-class selection - repetitive sourcing trace elided]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:29:06.629 Found 0000:af:00.0 (0x8086 - 0x159b)
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:29:06.629 Found 0000:af:00.1 (0x8086 - 0x159b)
[nvmf/common.sh@368-@427: ice-driver checks and per-device /sys/bus/pci/devices/$pci/net/ lookups elided]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:29:06.629 Found net devices under 0000:af:00.0: cvl_0_0
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:06.629 Found net devices under 0000:af:00.1: cvl_0_1
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:06.629 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
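The device discovery above boils down to globbing sysfs: for each supported PCI ID the script expands "/sys/bus/pci/devices/$pci/net/"* to find the kernel net device bound to that function. The equivalent standalone check, using the PCI address found in this run:

    # list kernel net devices backed by PCI function 0000:af:00.0
    ls /sys/bus/pci/devices/0000:af:00.0/net/    # prints cvl_0_0, per the log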
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.630 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.630 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.630 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.630 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.630 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:29:06.890 00:29:06.890 --- 10.0.0.2 ping statistics --- 00:29:06.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.890 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:29:06.890 00:29:06.890 --- 10.0.0.1 ping statistics --- 00:29:06.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.890 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=429445 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 429445 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 429445 ']' 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
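Condensed, the nvmf_tcp_init sequence traced above gives this single host two NVMe/TCP endpoints: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, the NVMe/TCP port 4420 is opened in iptables, and both directions are ping-verified. As a plain sketch with the same names and addresses as this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The target itself is launched inside the namespace through NVMF_TARGET_NS_CMD; the `ip netns exec cvl_0_0_ns_spdk` prefix appears four times on the nvmf_tgt command line above, apparently because common.sh prepends it to NVMF_APP on each re-initialization (nvmf/common.sh@293), and `ip netns exec` simply nests.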
00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.890 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.150 [2024-12-13 12:34:34.627609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:07.150 [2024-12-13 12:34:34.627653] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.150 [2024-12-13 12:34:34.690637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.150 [2024-12-13 12:34:34.712529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.150 [2024-12-13 12:34:34.712571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.150 [2024-12-13 12:34:34.712579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.150 [2024-12-13 12:34:34.712585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.150 [2024-12-13 12:34:34.712589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.150 [2024-12-13 12:34:34.713904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.150 [2024-12-13 12:34:34.714010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.150 [2024-12-13 12:34:34.714124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.150 [2024-12-13 12:34:34.714126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.150 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.409 [2024-12-13 12:34:34.853567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:07.409 12:34:34 
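rpc_cmd is the test suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock, so the transport creation above, and the create_subsystems loop that follows, correspond roughly to the direct calls below. Only the transport flags, the cnode1..cnode10 NQNs, the Malloc names, and the 10.0.0.2:4420 listener are taken from this log; the malloc bdev size/block size and the exact per-subsystem options are illustrative assumptions, and the real script batches these commands through rpcs.txt rather than invoking rpc.py per line.

    # transport, exactly as logged: -t tcp -o -u 8192 (see rpc.py
    # nvmf_create_transport -h for the meaning of the tuning flags)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # ten subsystems, each backed by one malloc bdev and listening on the target IP
    for i in $(seq 1 10); do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512   # 64 MiB, 512 B blocks (assumed)
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done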
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.409 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.410 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.410 Malloc1 
00:29:07.410 [2024-12-13 12:34:34.961532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.410 Malloc2 00:29:07.410 Malloc3 00:29:07.410 Malloc4 00:29:07.410 Malloc5 00:29:07.668 Malloc6 00:29:07.668 Malloc7 00:29:07.668 Malloc8 00:29:07.668 Malloc9 00:29:07.668 Malloc10 00:29:07.668 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.668 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:07.668 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.668 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.927 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=429559 00:29:07.927 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:07.927 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:07.927 [2024-12-13 12:34:35.452959] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 429445 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429445 ']' 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429445 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429445 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429445' 00:29:13.203 killing process with pid 429445 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 429445 00:29:13.203 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 429445 00:29:13.203 Write completed with error (sct=0, sc=8) 
00:29:13.203 starting I/O failed: -6
00:29:13.203 Write completed with error (sct=0, sc=8)
[repeated rounds of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" omitted for the remaining queued I/Os]
00:29:13.203 [2024-12-13 12:34:40.459451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[failed-write lines continue]
00:29:13.203 [2024-12-13 12:34:40.460288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.203 [2024-12-13 12:34:40.460526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb07a0 is same with the state(6) to be set [same message repeated through 12:34:40.460602; in the raw log it is interleaved mid-word with the failed-write lines]
00:29:13.204 [2024-12-13 12:34:40.460923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa388e0 is same with the state(6) to be set [repeated through 12:34:40.460986]
00:29:13.204 [2024-12-13 12:34:40.461243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa38dd0 is same with the state(6) to be set [repeated through 12:34:40.461301]
00:29:13.204 [2024-12-13 12:34:40.461309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.204 NVMe io qpair process completion error
00:29:13.204 [2024-12-13 12:34:40.461606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb02d0 is same with the state(6) to be set [repeated through 12:34:40.461651]
[failed-write lines continue on the next connection]
00:29:13.204 [2024-12-13 12:34:40.462175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[failed-write lines continue]
00:29:13.205 [2024-12-13 12:34:40.463050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[failed-write lines continue]
00:29:13.205 [2024-12-13 12:34:40.464081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[failed-write lines continue]
00:29:13.205 [2024-12-13 12:34:40.465795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.206 NVMe io qpair process completion error
[failed-write lines continue on the next connection]
00:29:13.206 [2024-12-13 12:34:40.469644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.206 [2024-12-13 12:34:40.469807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3d460 is same with the state(6) to be set [repeated through 12:34:40.469879]
00:29:13.206 [2024-12-13 12:34:40.470376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3d930 is same with the state(6) to be set [repeated through 12:34:40.470437]
00:29:13.206 [2024-12-13 12:34:40.470531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.206 [2024-12-13 12:34:40.470684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3caa0 is same with the state(6) to be set [repeated through 12:34:40.470773]
[failed-write lines continue]
00:29:13.207 [2024-12-13 12:34:40.471530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[failed-write lines continue]
00:29:13.207 [2024-12-13 12:34:40.473031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.207 NVMe io qpair process completion error
[failed-write lines continue on the next connection]
00:29:13.208 [2024-12-13 12:34:40.474004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.208 starting I/O failed: -6
[failed-write lines continue beyond this excerpt]
sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 [2024-12-13 12:34:40.474887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 
00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 [2024-12-13 12:34:40.475900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 
00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.208 Write completed with error (sct=0, sc=8) 00:29:13.208 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 
00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 [2024-12-13 12:34:40.477856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.209 NVMe io qpair process completion error 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write 
completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 [2024-12-13 12:34:40.478897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 
00:29:13.209 starting I/O failed: -6 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.209 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 [2024-12-13 12:34:40.479773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed 
with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 [2024-12-13 12:34:40.480773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 
starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 [2024-12-13 12:34:40.482695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.210 NVMe io qpair process completion error 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with 
error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 starting I/O failed: -6 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.210 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 [2024-12-13 12:34:40.483724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error 
(sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 [2024-12-13 12:34:40.484628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 
00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 [2024-12-13 12:34:40.485614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error 
(sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.211 starting I/O failed: -6 00:29:13.211 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error 
(sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 [2024-12-13 12:34:40.491650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.212 NVMe io qpair process completion error 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 starting I/O failed: -6 00:29:13.212 Write completed with error (sct=0, sc=8) 00:29:13.212 Write completed with error (sct=0, sc=8) 
00:29:13.212 Write completed with error (sct=0, sc=8)
00:29:13.212 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries for nqn.2016-06.io.spdk:cnode6 omitted ...]
00:29:13.212 [2024-12-13 12:34:40.492663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.213 [2024-12-13 12:34:40.493577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.213 [2024-12-13 12:34:40.494565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.214 [2024-12-13 12:34:40.496339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.214 NVMe io qpair process completion error
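The "CQ transport error -6 (No such device or address)" entries above are the negated errno (-ENXIO == -6) that SPDK's completion poller reports once the TCP connection behind a qpair is gone; every write still in flight is then completed with an error status. A minimal polling sketch of how a caller observes that condition (illustrative only, not code from this test run):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair; sketches the error path behind the log lines above. */
    static void poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc;

            /* max_completions == 0 means "drain everything available". */
            rc = spdk_nvme_qpair_process_completions(qpair, 0);
            if (rc == -ENXIO) {
                    /* Transport failure: the connection is gone, so outstanding
                     * I/O completes with an error, as in the entries above. */
            } else if (rc < 0) {
                    /* Any other negated errno also means the qpair has failed. */
            }
    }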
00:29:13.214 Write completed with error (sct=0, sc=8)
00:29:13.214 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries for nqn.2016-06.io.spdk:cnode2 omitted ...]
00:29:13.214 [2024-12-13 12:34:40.497322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.214 [2024-12-13 12:34:40.498212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.215 [2024-12-13 12:34:40.499202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.215 [2024-12-13 12:34:40.500763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.215 NVMe io qpair process completion error
00:29:13.215 Write completed with error (sct=0, sc=8)
00:29:13.215 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries for nqn.2016-06.io.spdk:cnode7 omitted ...]
00:29:13.215 [2024-12-13 12:34:40.501824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.216 [2024-12-13 12:34:40.502697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.216 [2024-12-13 12:34:40.503698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.217 [2024-12-13 12:34:40.509561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.217 NVMe io qpair process completion error
00:29:13.217 Write completed with error (sct=0, sc=8)
00:29:13.217 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries for nqn.2016-06.io.spdk:cnode8 omitted ...]
00:29:13.217 [2024-12-13 12:34:40.516513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.218 [2024-12-13 12:34:40.517717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.218 [2024-12-13 12:34:40.519729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.218 NVMe io qpair process completion error
00:29:13.218 Initializing NVMe Controllers
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:13.218 Controller IO queue size 128, less than required.
00:29:13.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
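On the "Controller IO queue size 128, less than required" advisories: the negotiated I/O queue is shallower than the workload's outstanding-I/O depth, so excess submissions wait in the NVMe driver's software queue; the log's suggested fix is a lower queue depth or smaller I/O size. When driving the SPDK API directly, the queue-size request lives in the controller options; a minimal sketch under that assumption (the controller may still cap the value it grants; this is not the test tool's own code):

    #include "spdk/nvme.h"

    /* Sketch: connect to an NVMe-oF target while requesting a deeper I/O queue,
     * so fewer submissions are held back in the driver's software queue. */
    static struct spdk_nvme_ctrlr *
    connect_with_queue_size(const struct spdk_nvme_transport_id *trid)
    {
            struct spdk_nvme_ctrlr_opts opts;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            opts.io_queue_size = 256;       /* request more than the 128 warned about */

            return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }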
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:13.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:13.219 Initialization complete. Launching workers.
00:29:13.219 ========================================================
00:29:13.219 Latency(us)
00:29:13.219 Device Information : IOPS MiB/s Average min max
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2180.46 93.69 58707.55 682.41 110856.37
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2173.10 93.38 58919.97 913.68 112954.45
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2202.57 94.64 57856.50 858.49 104768.09
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2222.15 95.48 57050.40 894.14 103444.90
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2203.41 94.68 58146.65 1079.74 102820.51
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2227.83 95.73 56907.57 690.11 101644.75
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2232.89 95.94 56788.69 671.42 100647.10
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2241.52 96.32 56583.20 682.42 99906.03
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2185.52 93.91 58048.51 653.06 101603.02
00:29:13.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2174.78 93.45 58399.44 693.47 108423.24
00:29:13.219 ========================================================
00:29:13.219 Total : 22044.23 947.21 57732.33 653.06 112954.45
========================================================
00:29:13.219
00:29:13.219 [2024-12-13 12:34:40.522804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f6d0 is same with the state(6) to be set
00:29:13.219 [2024-12-13 12:34:40.522858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17406c0 is same with the state(6) to be set
00:29:13.219 [2024-12-13 12:34:40.522890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f070 is same with the state(6) to be set
00:29:13.219 [2024-12-13 12:34:40.522922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f3a0 is same with the state(6) to be set
00:29:13.219 [2024-12-13 12:34:40.522952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cbff0 is same with the state(6) to be set
00:29:13.219 [2024-12-13 12:34:40.522983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x173fa00 is same with the state(6) to be set 00:29:13.219 [2024-12-13 12:34:40.523013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fd30 is same with the state(6) to be set 00:29:13.219 [2024-12-13 12:34:40.523042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d0f00 is same with the state(6) to be set 00:29:13.219 [2024-12-13 12:34:40.523077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740060 is same with the state(6) to be set 00:29:13.219 [2024-12-13 12:34:40.523107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740390 is same with the state(6) to be set 00:29:13.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:13.219 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 429559 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429559 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 429559 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.157 12:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.157 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.157 rmmod nvme_tcp 00:29:14.416 rmmod nvme_fabrics 00:29:14.416 rmmod nvme_keyring 00:29:14.416 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.416 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:14.416 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:14.416 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 429445 ']' 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 429445 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429445 ']' 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429445 00:29:14.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (429445) - No such process 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429445 is not found' 00:29:14.417 Process with pid 429445 is not found 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.417 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.320 12:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.320 00:29:16.320 real 0m9.759s 00:29:16.320 user 0m24.959s 00:29:16.320 sys 0m5.066s 00:29:16.320 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.320 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:16.320 ************************************ 00:29:16.320 END TEST nvmf_shutdown_tc4 00:29:16.320 ************************************ 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:16.579 00:29:16.579 real 0m39.898s 00:29:16.579 user 1m37.297s 00:29:16.579 sys 0m13.684s 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.579 ************************************ 00:29:16.579 END TEST nvmf_shutdown 00:29:16.579 ************************************ 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:16.579 ************************************ 00:29:16.579 START TEST nvmf_nsid 00:29:16.579 ************************************ 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:16.579 * Looking for test storage... 
00:29:16.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.580 --rc genhtml_branch_coverage=1 00:29:16.580 --rc genhtml_function_coverage=1 00:29:16.580 --rc genhtml_legend=1 00:29:16.580 --rc geninfo_all_blocks=1 00:29:16.580 --rc geninfo_unexecuted_blocks=1 00:29:16.580 00:29:16.580 ' 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.580 --rc genhtml_branch_coverage=1 00:29:16.580 --rc genhtml_function_coverage=1 00:29:16.580 --rc genhtml_legend=1 00:29:16.580 --rc geninfo_all_blocks=1 00:29:16.580 --rc geninfo_unexecuted_blocks=1 00:29:16.580 00:29:16.580 ' 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.580 --rc genhtml_branch_coverage=1 00:29:16.580 --rc genhtml_function_coverage=1 00:29:16.580 --rc genhtml_legend=1 00:29:16.580 --rc geninfo_all_blocks=1 00:29:16.580 --rc geninfo_unexecuted_blocks=1 00:29:16.580 00:29:16.580 ' 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.580 --rc genhtml_branch_coverage=1 00:29:16.580 --rc genhtml_function_coverage=1 00:29:16.580 --rc genhtml_legend=1 00:29:16.580 --rc geninfo_all_blocks=1 00:29:16.580 --rc geninfo_unexecuted_blocks=1 00:29:16.580 00:29:16.580 ' 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.580 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.840 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.114 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:22.115 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:22.115 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
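The device matching above walks pci_bus_cache for the known E810/X722/mlx PCI IDs; 0x8086:0x159b is the Intel E810 pair that both ports of this node report. As a hedged cross-check outside the harness, the same devices can be spotted with plain lspci (illustrative only, not something the test runs):

# list PCI NICs in numeric form and filter for the E810 vendor:device pair matched above
lspci -nn | grep -i '8086:159b'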
00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:22.115 Found net devices under 0000:af:00.0: cvl_0_0 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:22.115 Found net devices under 0000:af:00.1: cvl_0_1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.115 12:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.115 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:29:22.375 00:29:22.375 --- 10.0.0.2 ping statistics --- 00:29:22.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.375 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:29:22.375 00:29:22.375 --- 10.0.0.1 ping statistics --- 00:29:22.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.375 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=433929 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 433929 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 433929 ']' 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.375 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.375 [2024-12-13 12:34:50.017406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:22.375 [2024-12-13 12:34:50.017457] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.634 [2024-12-13 12:34:50.100062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.635 [2024-12-13 12:34:50.123167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.635 [2024-12-13 12:34:50.123200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.635 [2024-12-13 12:34:50.123209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.635 [2024-12-13 12:34:50.123217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.635 [2024-12-13 12:34:50.123223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.635 [2024-12-13 12:34:50.123695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=434048 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
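At this point nsid.sh has two SPDK instances up: the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace on core 0, and a second spdk_tgt on core 1 that is addressed through its own RPC socket. A minimal sketch of that pattern, reusing the paths from this run; rpc_get_methods is just a harmless probe added for illustration, not the call the test makes.

# second target gets a private RPC socket so it cannot collide with the default /var/tmp/spdk.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
# drive that instance by pointing rpc.py at its socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods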
00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d00d4487-e450-4606-827b-0f1465adc21b 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4ecf6731-5c06-4e1b-9e4f-b119afa033ad 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=db7cbca4-d5b2-4049-b93a-2655a83790b1 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.635 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.635 null0 00:29:22.635 null1 00:29:22.635 [2024-12-13 12:34:50.311494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:22.635 [2024-12-13 12:34:50.311537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434048 ] 00:29:22.635 null2 00:29:22.635 [2024-12-13 12:34:50.319443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.894 [2024-12-13 12:34:50.343630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 434048 /var/tmp/tgt2.sock 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434048 ']' 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.894 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:22.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
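The three uuidgen values become the namespace UUIDs on tgt2, and the checks that follow compare them against the NGUIDs the kernel host reports. A short sketch of the uuid2nguid conversion the test relies on below: the NGUID is simply the UUID with its dashes stripped, upper-cased for the comparison (values copied from this run):

uuid=d00d4487-e450-4606-827b-0f1465adc21b        # ns1uuid above
nguid=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
echo "$nguid"                                    # D00D4487E4504606827B0F1465ADC21B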
00:29:22.895 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.895 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.895 [2024-12-13 12:34:50.384896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.895 [2024-12-13 12:34:50.407870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.154 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.154 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:23.154 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:23.413 [2024-12-13 12:34:50.916300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.413 [2024-12-13 12:34:50.932386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:23.413 nvme0n1 nvme0n2 00:29:23.413 nvme1n1 00:29:23.413 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:23.413 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:23.413 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:24.350 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:24.609 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:24.609 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:25.547 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d00d4487-e450-4606-827b-0f1465adc21b 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d00d4487e4504606827b0f1465adc21b 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D00D4487E4504606827B0F1465ADC21B 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D00D4487E4504606827B0F1465ADC21B == \D\0\0\D\4\4\8\7\E\4\5\0\4\6\0\6\8\2\7\B\0\F\1\4\6\5\A\D\C\2\1\B ]] 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4ecf6731-5c06-4e1b-9e4f-b119afa033ad 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4ecf67315c064e1b9e4fb119afa033ad 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4ECF67315C064E1B9E4FB119AFA033AD 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4ECF67315C064E1B9E4FB119AFA033AD == \4\E\C\F\6\7\3\1\5\C\0\6\4\E\1\B\9\E\4\F\B\1\1\9\A\F\A\0\3\3\A\D ]] 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:25.547 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid db7cbca4-d5b2-4049-b93a-2655a83790b1 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:25.547 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=db7cbca4d5b24049b93a2655a83790b1 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DB7CBCA4D5B24049B93A2655A83790B1 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DB7CBCA4D5B24049B93A2655A83790B1 == \D\B\7\C\B\C\A\4\D\5\B\2\4\0\4\9\B\9\3\A\2\6\5\5\A\8\3\7\9\0\B\1 ]] 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 434048 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434048 ']' 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434048 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.807 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434048 00:29:26.066 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:26.066 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:26.066 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434048' 00:29:26.066 killing process with pid 434048 00:29:26.066 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434048 00:29:26.066 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434048 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.326 rmmod nvme_tcp 00:29:26.326 rmmod nvme_fabrics 00:29:26.326 rmmod nvme_keyring 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 433929 ']' 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 433929 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 433929 ']' 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 433929 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433929 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433929' 00:29:26.326 killing process with pid 433929 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 433929 00:29:26.326 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 433929 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.585 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.493 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.493 00:29:28.493 real 0m12.075s 00:29:28.493 user 0m9.548s 00:29:28.493 
sys 0m5.277s 00:29:28.493 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.493 12:34:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:28.493 ************************************ 00:29:28.493 END TEST nvmf_nsid 00:29:28.493 ************************************ 00:29:28.753 12:34:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:28.753 00:29:28.753 real 18m36.024s 00:29:28.753 user 49m10.936s 00:29:28.753 sys 4m39.858s 00:29:28.753 12:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.753 12:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:28.753 ************************************ 00:29:28.753 END TEST nvmf_target_extra 00:29:28.753 ************************************ 00:29:28.753 12:34:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.753 12:34:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.753 12:34:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.753 12:34:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.753 ************************************ 00:29:28.753 START TEST nvmf_host 00:29:28.753 ************************************ 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.753 * Looking for test storage... 00:29:28.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:28.753 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.013 --rc genhtml_branch_coverage=1 00:29:29.013 --rc genhtml_function_coverage=1 00:29:29.013 --rc genhtml_legend=1 00:29:29.013 --rc geninfo_all_blocks=1 00:29:29.013 --rc geninfo_unexecuted_blocks=1 00:29:29.013 00:29:29.013 ' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.013 --rc genhtml_branch_coverage=1 00:29:29.013 --rc genhtml_function_coverage=1 00:29:29.013 --rc genhtml_legend=1 00:29:29.013 --rc geninfo_all_blocks=1 00:29:29.013 --rc geninfo_unexecuted_blocks=1 00:29:29.013 00:29:29.013 ' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.013 --rc genhtml_branch_coverage=1 00:29:29.013 --rc genhtml_function_coverage=1 00:29:29.013 --rc genhtml_legend=1 00:29:29.013 --rc geninfo_all_blocks=1 00:29:29.013 --rc geninfo_unexecuted_blocks=1 00:29:29.013 00:29:29.013 ' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.013 --rc genhtml_branch_coverage=1 00:29:29.013 --rc genhtml_function_coverage=1 00:29:29.013 --rc genhtml_legend=1 00:29:29.013 --rc geninfo_all_blocks=1 00:29:29.013 --rc geninfo_unexecuted_blocks=1 00:29:29.013 00:29:29.013 ' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
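The cmp_versions walk traced above decides which lcov option set to export: each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right. A condensed sketch of that logic (consistent with the scripts/common.sh lines in the trace, not a verbatim copy):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    # lt 1.15 2 -> true here, so the pre-1.15 lcov option set gets exported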
00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.013 12:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.013 ************************************ 00:29:29.013 START TEST nvmf_multicontroller 00:29:29.013 ************************************ 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:29.014 * Looking for test storage... 
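The "[: : integer expression expected" line above is noise rather than a failure: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that expanded empty, so the test errors out and is simply treated as false. A defensive variant would default the value before comparing; SOME_FLAG below is a hypothetical placeholder, since the trace does not show which variable is being tested:

    # hypothetical rewrite of the noisy test; SOME_FLAG is a stand-in name
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # flag-enabled branch
    fi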
00:29:29.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.014 --rc genhtml_branch_coverage=1 00:29:29.014 --rc genhtml_function_coverage=1 00:29:29.014 --rc genhtml_legend=1 00:29:29.014 --rc geninfo_all_blocks=1 00:29:29.014 --rc geninfo_unexecuted_blocks=1 00:29:29.014 00:29:29.014 ' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.014 --rc genhtml_branch_coverage=1 00:29:29.014 --rc genhtml_function_coverage=1 00:29:29.014 --rc genhtml_legend=1 00:29:29.014 --rc geninfo_all_blocks=1 00:29:29.014 --rc geninfo_unexecuted_blocks=1 00:29:29.014 00:29:29.014 ' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.014 --rc genhtml_branch_coverage=1 00:29:29.014 --rc genhtml_function_coverage=1 00:29:29.014 --rc genhtml_legend=1 00:29:29.014 --rc geninfo_all_blocks=1 00:29:29.014 --rc geninfo_unexecuted_blocks=1 00:29:29.014 00:29:29.014 ' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.014 --rc genhtml_branch_coverage=1 00:29:29.014 --rc genhtml_function_coverage=1 00:29:29.014 --rc genhtml_legend=1 00:29:29.014 --rc geninfo_all_blocks=1 00:29:29.014 --rc geninfo_unexecuted_blocks=1 00:29:29.014 00:29:29.014 ' 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.014 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:29.274 12:34:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.274 12:34:56 
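Within the common.sh environment dump above, the host identity pair is worth calling out: nvme gen-hostnqn mints a UUID-based NQN, and the same UUID doubles as the host ID. One way to derive the pair (a sketch consistent with the values in the trace, not necessarily the repo's exact lines):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep just the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # consumed later as: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 ...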
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.274 12:34:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.845 
12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:35.845 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:35.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.845 12:35:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:35.845 Found net devices under 0000:af:00.0: cvl_0_0 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:35.845 Found net devices under 0000:af:00.1: cvl_0_1 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
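The discovery pass above filters the host's PCI functions against known Intel/Mellanox device IDs (the e810/x722/mlx arrays), then maps each surviving function to its kernel interface through sysfs, yielding the "Found net devices under 0000:af:00.x: cvl_0_x" lines. The mapping step, condensed into a standalone sketch:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done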
00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.845 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:29:35.846 00:29:35.846 --- 10.0.0.2 ping statistics --- 00:29:35.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.846 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:35.846 00:29:35.846 --- 10.0.0.1 ping statistics --- 00:29:35.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.846 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=438188 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 438188 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438188 ']' 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 [2024-12-13 12:35:02.662883] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
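nvmf_tcp_init, traced above, splits the two e810 ports across a network namespace so the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) can talk over real wire on one box. The same plumbing as a standalone sequence, with the commands lifted from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The SPDK_NVMF comment is what the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline, seen earlier in the nsid cleanup, keys on to strip only test-installed rules.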
00:29:35.846 [2024-12-13 12:35:02.662934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.846 [2024-12-13 12:35:02.739283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.846 [2024-12-13 12:35:02.762385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.846 [2024-12-13 12:35:02.762422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.846 [2024-12-13 12:35:02.762429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.846 [2024-12-13 12:35:02.762435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.846 [2024-12-13 12:35:02.762440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.846 [2024-12-13 12:35:02.763685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.846 [2024-12-13 12:35:02.763812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.846 [2024-12-13 12:35:02.763813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 [2024-12-13 12:35:02.902800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 Malloc0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 [2024-12-13 12:35:02.970382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 [2024-12-13 12:35:02.978315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 Malloc1 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.846 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=438227 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 438227 /var/tmp/bdevperf.sock 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438227 ']' 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
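bdevperf is launched with -z (stay idle until configured over RPC) on its own socket, and the harness then blocks in waitforlisten until that socket answers. A minimal stand-in for that wait (the real autotest_common.sh helper also retries RPC probes; this sketch only polls for the process and the UNIX socket):

    waitforlisten() {
        # sketch only -- captures the polling idea, not the full helper
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [[ -S $sock ]] && return 0               # UNIX socket is up
            sleep 0.1
        done
        return 1
    }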
00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 NVMe0n1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.847 1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 request: 00:29:35.847 { 00:29:35.847 "name": "NVMe0", 00:29:35.847 "trtype": "tcp", 00:29:35.847 "traddr": "10.0.0.2", 00:29:35.847 "adrfam": "ipv4", 00:29:35.847 "trsvcid": "4420", 00:29:35.847 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:35.847 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:35.847 "hostaddr": "10.0.0.1", 00:29:35.847 "prchk_reftag": false, 00:29:35.847 "prchk_guard": false, 00:29:35.847 "hdgst": false, 00:29:35.847 "ddgst": false, 00:29:35.847 "allow_unrecognized_csi": false, 00:29:35.847 "method": "bdev_nvme_attach_controller", 00:29:35.847 "req_id": 1 00:29:35.847 } 00:29:35.847 Got JSON-RPC error response 00:29:35.847 response: 00:29:35.847 { 00:29:35.847 "code": -114, 00:29:35.847 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:35.847 } 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 request: 00:29:35.847 { 00:29:35.847 "name": "NVMe0", 00:29:35.847 "trtype": "tcp", 00:29:35.847 "traddr": "10.0.0.2", 00:29:35.847 "adrfam": "ipv4", 00:29:35.847 "trsvcid": "4420", 00:29:35.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.847 "hostaddr": "10.0.0.1", 00:29:35.847 "prchk_reftag": false, 00:29:35.847 "prchk_guard": false, 00:29:35.847 "hdgst": false, 00:29:35.847 "ddgst": false, 00:29:35.847 "allow_unrecognized_csi": false, 00:29:35.847 "method": "bdev_nvme_attach_controller", 00:29:35.847 "req_id": 1 00:29:35.847 } 00:29:35.847 Got JSON-RPC error response 00:29:35.847 response: 00:29:35.847 { 00:29:35.847 "code": -114, 00:29:35.847 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:35.847 } 00:29:35.847 12:35:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.847 request: 00:29:35.847 { 00:29:35.847 "name": "NVMe0", 00:29:35.847 "trtype": "tcp", 00:29:35.847 "traddr": "10.0.0.2", 00:29:35.847 "adrfam": "ipv4", 00:29:35.847 "trsvcid": "4420", 00:29:35.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.847 "hostaddr": "10.0.0.1", 00:29:35.847 "prchk_reftag": false, 00:29:35.847 "prchk_guard": false, 00:29:35.847 "hdgst": false, 00:29:35.847 "ddgst": false, 00:29:35.847 "multipath": "disable", 00:29:35.847 "allow_unrecognized_csi": false, 00:29:35.847 "method": "bdev_nvme_attach_controller", 00:29:35.847 "req_id": 1 00:29:35.847 } 00:29:35.847 Got JSON-RPC error response 00:29:35.847 response: 00:29:35.847 { 00:29:35.847 "code": -114, 00:29:35.847 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:35.847 } 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.847 12:35:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.847 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.107 request: 00:29:36.107 { 00:29:36.107 "name": "NVMe0", 00:29:36.107 "trtype": "tcp", 00:29:36.107 "traddr": "10.0.0.2", 00:29:36.107 "adrfam": "ipv4", 00:29:36.107 "trsvcid": "4420", 00:29:36.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.107 "hostaddr": "10.0.0.1", 00:29:36.107 "prchk_reftag": false, 00:29:36.107 "prchk_guard": false, 00:29:36.107 "hdgst": false, 00:29:36.107 "ddgst": false, 00:29:36.107 "multipath": "failover", 00:29:36.107 "allow_unrecognized_csi": false, 00:29:36.107 "method": "bdev_nvme_attach_controller", 00:29:36.107 "req_id": 1 00:29:36.107 } 00:29:36.107 Got JSON-RPC error response 00:29:36.107 response: 00:29:36.107 { 00:29:36.107 "code": -114, 00:29:36.107 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.107 } 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.107 NVMe0n1 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
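All of the negative cases above fail with code -114 (the JSON-RPC mapping of -EALREADY): an existing controller name cannot be reused with a different host NQN, a different subsystem NQN, or a conflicting multipath mode (-x disable / -x failover) against the same portal, while the final bdev_nvme_attach_controller to port 4421 succeeds because it adds a genuinely new network path under the same name. A sketch of how such expected-failure checks can be scripted; expect_rpc_failure is a hypothetical stand-in for the NOT/valid_exec_arg helpers in autotest_common.sh, and all RPC flags are the ones used in the trace:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative, as in the earlier sketch
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

# scripts/rpc.py exits non-zero on a JSON-RPC error, so an expected-failure
# wrapper is just an inverted status check.
expect_rpc_failure() {
    if "$@" 2>/dev/null; then
        echo "expected failure, but RPC succeeded: $*" >&2
        return 1
    fi
}

# Same name and portal, different host NQN -> error -114 (-EALREADY).
expect_rpc_failure rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
# Same name, different subsystem NQN -> error -114.
expect_rpc_failure rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
# Same path with multipath disabled (or set to failover) -> error -114.
expect_rpc_failure rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable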
00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.107 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:36.107 12:35:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:37.486 { 00:29:37.486 "results": [ 00:29:37.486 { 00:29:37.486 "job": "NVMe0n1", 00:29:37.486 "core_mask": "0x1", 00:29:37.486 "workload": "write", 00:29:37.486 "status": "finished", 00:29:37.486 "queue_depth": 128, 00:29:37.486 "io_size": 4096, 00:29:37.486 "runtime": 1.005445, 00:29:37.486 "iops": 25276.370164454544, 00:29:37.486 "mibps": 98.73582095490056, 00:29:37.486 "io_failed": 0, 00:29:37.486 "io_timeout": 0, 00:29:37.486 "avg_latency_us": 5052.192900950732, 00:29:37.486 "min_latency_us": 2980.327619047619, 00:29:37.486 "max_latency_us": 15291.733333333334 00:29:37.486 } 00:29:37.486 ], 00:29:37.486 "core_count": 1 00:29:37.486 } 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 438227 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 438227 ']' 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438227 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438227 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438227' 00:29:37.486 killing process with pid 438227 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438227 00:29:37.486 12:35:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438227 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:37.486 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.486 [2024-12-13 12:35:03.081965] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:37.486 [2024-12-13 12:35:03.082012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438227 ] 00:29:37.486 [2024-12-13 12:35:03.157458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.486 [2024-12-13 12:35:03.180406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.486 [2024-12-13 12:35:03.753076] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 73b3a351-9d6f-4706-a212-4dde1ecd4247 already exists 00:29:37.486 [2024-12-13 12:35:03.753105] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:73b3a351-9d6f-4706-a212-4dde1ecd4247 alias for bdev NVMe1n1 00:29:37.486 [2024-12-13 12:35:03.753112] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:37.486 Running I/O for 1 seconds... 00:29:37.486 25223.00 IOPS, 98.53 MiB/s 00:29:37.486 Latency(us) 00:29:37.486 [2024-12-13T11:35:05.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.486 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:37.486 NVMe0n1 : 1.01 25276.37 98.74 0.00 0.00 5052.19 2980.33 15291.73 00:29:37.486 [2024-12-13T11:35:05.186Z] =================================================================================================================== 00:29:37.486 [2024-12-13T11:35:05.186Z] Total : 25276.37 98.74 0.00 0.00 5052.19 2980.33 15291.73 00:29:37.486 Received shutdown signal, test time was about 1.000000 seconds 00:29:37.486 00:29:37.486 Latency(us) 00:29:37.486 [2024-12-13T11:35:05.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.486 [2024-12-13T11:35:05.186Z] =================================================================================================================== 00:29:37.486 [2024-12-13T11:35:05.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.486 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.486 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.486 rmmod nvme_tcp 00:29:37.745 rmmod nvme_fabrics 00:29:37.745 rmmod nvme_keyring 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:37.745 
12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 438188 ']' 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 438188 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438188 ']' 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438188 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438188 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438188' 00:29:37.745 killing process with pid 438188 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438188 00:29:37.745 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438188 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.004 12:35:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.908 12:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.908 00:29:39.908 real 0m11.049s 00:29:39.908 user 0m12.112s 00:29:39.908 sys 0m5.068s 00:29:39.908 12:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.908 12:35:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:39.908 ************************************ 00:29:39.908 END TEST nvmf_multicontroller 00:29:39.908 ************************************ 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.169 ************************************ 00:29:40.169 START TEST nvmf_aer 00:29:40.169 ************************************ 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:40.169 * Looking for test storage... 00:29:40.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.169 --rc genhtml_branch_coverage=1 00:29:40.169 --rc genhtml_function_coverage=1 00:29:40.169 --rc genhtml_legend=1 00:29:40.169 --rc geninfo_all_blocks=1 00:29:40.169 --rc geninfo_unexecuted_blocks=1 00:29:40.169 00:29:40.169 ' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.169 --rc genhtml_branch_coverage=1 00:29:40.169 --rc genhtml_function_coverage=1 00:29:40.169 --rc genhtml_legend=1 00:29:40.169 --rc geninfo_all_blocks=1 00:29:40.169 --rc geninfo_unexecuted_blocks=1 00:29:40.169 00:29:40.169 ' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.169 --rc genhtml_branch_coverage=1 00:29:40.169 --rc genhtml_function_coverage=1 00:29:40.169 --rc genhtml_legend=1 00:29:40.169 --rc geninfo_all_blocks=1 00:29:40.169 --rc geninfo_unexecuted_blocks=1 00:29:40.169 00:29:40.169 ' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.169 --rc genhtml_branch_coverage=1 00:29:40.169 --rc genhtml_function_coverage=1 00:29:40.169 --rc genhtml_legend=1 00:29:40.169 --rc geninfo_all_blocks=1 00:29:40.169 --rc geninfo_unexecuted_blocks=1 00:29:40.169 00:29:40.169 ' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.169 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.170 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.429 12:35:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.000 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.001 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.001 12:35:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.001 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.001 
12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:29:47.001 00:29:47.001 --- 10.0.0.2 ping statistics --- 00:29:47.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.001 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:29:47.001 00:29:47.001 --- 10.0.0.1 ping statistics --- 00:29:47.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.001 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=442133 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 442133 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 442133 ']' 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.001 12:35:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.001 [2024-12-13 12:35:13.931796] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
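For orientation, the ping output above comes from a dual-port NIC split across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side). Reduced to plain commands, the bring-up in the trace is roughly:

# Target-side port lives in its own namespace; initiator-side port stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic through, tagged so teardown can strip the rule again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why every target-side command in the trace is wrapped in NVMF_TARGET_NS_CMD.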
00:29:47.001 [2024-12-13 12:35:13.931842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.001 [2024-12-13 12:35:14.012644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.001 [2024-12-13 12:35:14.036635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.001 [2024-12-13 12:35:14.036670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.001 [2024-12-13 12:35:14.036678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.001 [2024-12-13 12:35:14.036684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.001 [2024-12-13 12:35:14.036690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.001 [2024-12-13 12:35:14.038141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.001 [2024-12-13 12:35:14.038165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.001 [2024-12-13 12:35:14.038209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.001 [2024-12-13 12:35:14.038210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.001 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.001 [2024-12-13 12:35:14.178040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 Malloc0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 [2024-12-13 12:35:14.239253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 [ 00:29:47.002 { 00:29:47.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.002 "subtype": "Discovery", 00:29:47.002 "listen_addresses": [], 00:29:47.002 "allow_any_host": true, 00:29:47.002 "hosts": [] 00:29:47.002 }, 00:29:47.002 { 00:29:47.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.002 "subtype": "NVMe", 00:29:47.002 "listen_addresses": [ 00:29:47.002 { 00:29:47.002 "trtype": "TCP", 00:29:47.002 "adrfam": "IPv4", 00:29:47.002 "traddr": "10.0.0.2", 00:29:47.002 "trsvcid": "4420" 00:29:47.002 } 00:29:47.002 ], 00:29:47.002 "allow_any_host": true, 00:29:47.002 "hosts": [], 00:29:47.002 "serial_number": "SPDK00000000000001", 00:29:47.002 "model_number": "SPDK bdev Controller", 00:29:47.002 "max_namespaces": 2, 00:29:47.002 "min_cntlid": 1, 00:29:47.002 "max_cntlid": 65519, 00:29:47.002 "namespaces": [ 00:29:47.002 { 00:29:47.002 "nsid": 1, 00:29:47.002 "bdev_name": "Malloc0", 00:29:47.002 "name": "Malloc0", 00:29:47.002 "nguid": "C56B1A2B31394446ABC32AC2345D1C8D", 00:29:47.002 "uuid": "c56b1a2b-3139-4446-abc3-2ac2345d1c8d" 00:29:47.002 } 00:29:47.002 ] 00:29:47.002 } 00:29:47.002 ] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=442169 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 Malloc1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 Asynchronous Event Request test 00:29:47.002 Attaching to 10.0.0.2 00:29:47.002 Attached to 10.0.0.2 00:29:47.002 Registering asynchronous event callbacks... 00:29:47.002 Starting namespace attribute notice tests for all controllers... 00:29:47.002 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:47.002 aer_cb - Changed Namespace 00:29:47.002 Cleaning up... 
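The trace above is SPDK's Asynchronous Event Request (AER) path end to end: the target exposes a subsystem capped at two namespaces, the aer test binary connects and registers its callbacks, and hot-adding Malloc1 as nsid 2 fires the Namespace Attribute Changed notice (log page 4, AEN type 0x02). A minimal sketch of the same sequence driven by hand — assuming a running nvmf_tgt, with $SPDK_DIR standing in for the workspace checkout and scripts/rpc.py talking to the default /var/tmp/spdk.sock; the RPC names and arguments themselves are taken from the trace:

rpc=$SPDK_DIR/scripts/rpc.py    # assumed checkout location
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# run the AER listener in the background, then hot-add a second namespace to fire the notice
$SPDK_DIR/test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump that follows confirms the result: both Malloc0 (nsid 1) and Malloc1 (nsid 2) are attached under cnode1.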
00:29:47.002 [ 00:29:47.002 { 00:29:47.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.002 "subtype": "Discovery", 00:29:47.002 "listen_addresses": [], 00:29:47.002 "allow_any_host": true, 00:29:47.002 "hosts": [] 00:29:47.002 }, 00:29:47.002 { 00:29:47.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.002 "subtype": "NVMe", 00:29:47.002 "listen_addresses": [ 00:29:47.002 { 00:29:47.002 "trtype": "TCP", 00:29:47.002 "adrfam": "IPv4", 00:29:47.002 "traddr": "10.0.0.2", 00:29:47.002 "trsvcid": "4420" 00:29:47.002 } 00:29:47.002 ], 00:29:47.002 "allow_any_host": true, 00:29:47.002 "hosts": [], 00:29:47.002 "serial_number": "SPDK00000000000001", 00:29:47.002 "model_number": "SPDK bdev Controller", 00:29:47.002 "max_namespaces": 2, 00:29:47.002 "min_cntlid": 1, 00:29:47.002 "max_cntlid": 65519, 00:29:47.002 "namespaces": [ 00:29:47.002 { 00:29:47.002 "nsid": 1, 00:29:47.002 "bdev_name": "Malloc0", 00:29:47.002 "name": "Malloc0", 00:29:47.002 "nguid": "C56B1A2B31394446ABC32AC2345D1C8D", 00:29:47.002 "uuid": "c56b1a2b-3139-4446-abc3-2ac2345d1c8d" 00:29:47.002 }, 00:29:47.002 { 00:29:47.002 "nsid": 2, 00:29:47.002 "bdev_name": "Malloc1", 00:29:47.002 "name": "Malloc1", 00:29:47.002 "nguid": "D85861AACB564710AB848F564C48E248", 00:29:47.002 "uuid": "d85861aa-cb56-4710-ab84-8f564c48e248" 00:29:47.002 } 00:29:47.002 ] 00:29:47.002 } 00:29:47.002 ] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 442169 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.002 rmmod 
nvme_tcp 00:29:47.002 rmmod nvme_fabrics 00:29:47.002 rmmod nvme_keyring 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:47.002 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 442133 ']' 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 442133 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 442133 ']' 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 442133 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.003 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442133 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442133' 00:29:47.262 killing process with pid 442133 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 442133 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 442133 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.262 12:35:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.798 00:29:49.798 real 0m9.279s 00:29:49.798 user 0m5.020s 00:29:49.798 sys 0m4.868s 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.798 ************************************ 00:29:49.798 END TEST nvmf_aer 00:29:49.798 ************************************ 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.798 12:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.798 ************************************ 00:29:49.798 START TEST nvmf_async_init 00:29:49.798 ************************************ 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:49.798 * Looking for test storage... 00:29:49.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:49.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.798 --rc genhtml_branch_coverage=1 00:29:49.798 --rc genhtml_function_coverage=1 00:29:49.798 --rc genhtml_legend=1 00:29:49.798 --rc geninfo_all_blocks=1 00:29:49.798 --rc geninfo_unexecuted_blocks=1 00:29:49.798 00:29:49.798 ' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:49.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.798 --rc genhtml_branch_coverage=1 00:29:49.798 --rc genhtml_function_coverage=1 00:29:49.798 --rc genhtml_legend=1 00:29:49.798 --rc geninfo_all_blocks=1 00:29:49.798 --rc geninfo_unexecuted_blocks=1 00:29:49.798 00:29:49.798 ' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:49.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.798 --rc genhtml_branch_coverage=1 00:29:49.798 --rc genhtml_function_coverage=1 00:29:49.798 --rc genhtml_legend=1 00:29:49.798 --rc geninfo_all_blocks=1 00:29:49.798 --rc geninfo_unexecuted_blocks=1 00:29:49.798 00:29:49.798 ' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:49.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.798 --rc genhtml_branch_coverage=1 00:29:49.798 --rc genhtml_function_coverage=1 00:29:49.798 --rc genhtml_legend=1 00:29:49.798 --rc geninfo_all_blocks=1 00:29:49.798 --rc geninfo_unexecuted_blocks=1 00:29:49.798 00:29:49.798 ' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.798 12:35:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.798 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:49.799 12:35:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=710b7c8d022849e38956639600c5edae 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.799 12:35:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:56.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:56.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:56.370 Found net devices under 0000:af:00.0: cvl_0_0 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.370 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:56.371 Found net devices under 0000:af:00.1: cvl_0_1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.371 12:35:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.371 12:35:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:29:56.371 00:29:56.371 --- 10.0.0.2 ping statistics --- 00:29:56.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.371 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:29:56.371 00:29:56.371 --- 10.0.0.1 ping statistics --- 00:29:56.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.371 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=445640 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 445640 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 445640 ']' 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 [2024-12-13 12:35:23.182951] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
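The block above is the harness rebuilding its loopback topology for the next test: one port of the NIC pair (cvl_0_0) is moved into a network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, a one-packet ping in each direction proves reachability, and nvmf_tgt is then started inside the namespace — the EAL parameter line that follows is its startup. A sketch of the equivalent manual setup, with interface and namespace names taken from the trace and $SPDK_DIR again standing in for the checkout:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-side interface (the harness tags the rule with an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # target address, answered from inside the namespace
# start the target in the namespace: shm id 0, full tracepoint mask, single core
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1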
00:29:56.371 [2024-12-13 12:35:23.182993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.371 [2024-12-13 12:35:23.258132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.371 [2024-12-13 12:35:23.279729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.371 [2024-12-13 12:35:23.279763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.371 [2024-12-13 12:35:23.279771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.371 [2024-12-13 12:35:23.279777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.371 [2024-12-13 12:35:23.279801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.371 [2024-12-13 12:35:23.280290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 [2024-12-13 12:35:23.414949] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 null0 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 710b7c8d022849e38956639600c5edae 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 [2024-12-13 12:35:23.459178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 nvme0n1 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.371 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 [ 00:29:56.371 { 00:29:56.371 "name": "nvme0n1", 00:29:56.371 "aliases": [ 00:29:56.371 "710b7c8d-0228-49e3-8956-639600c5edae" 00:29:56.371 ], 00:29:56.372 "product_name": "NVMe disk", 00:29:56.372 "block_size": 512, 00:29:56.372 "num_blocks": 2097152, 00:29:56.372 "uuid": "710b7c8d-0228-49e3-8956-639600c5edae", 00:29:56.372 "numa_id": 1, 00:29:56.372 "assigned_rate_limits": { 00:29:56.372 "rw_ios_per_sec": 0, 00:29:56.372 "rw_mbytes_per_sec": 0, 00:29:56.372 "r_mbytes_per_sec": 0, 00:29:56.372 "w_mbytes_per_sec": 0 00:29:56.372 }, 00:29:56.372 "claimed": false, 00:29:56.372 "zoned": false, 00:29:56.372 "supported_io_types": { 00:29:56.372 "read": true, 00:29:56.372 "write": true, 00:29:56.372 "unmap": false, 00:29:56.372 "flush": true, 00:29:56.372 "reset": true, 00:29:56.372 "nvme_admin": true, 00:29:56.372 "nvme_io": true, 00:29:56.372 "nvme_io_md": false, 00:29:56.372 "write_zeroes": true, 00:29:56.372 "zcopy": false, 00:29:56.372 "get_zone_info": false, 00:29:56.372 "zone_management": false, 00:29:56.372 "zone_append": false, 00:29:56.372 "compare": true, 00:29:56.372 "compare_and_write": true, 00:29:56.372 "abort": true, 00:29:56.372 "seek_hole": false, 00:29:56.372 "seek_data": false, 00:29:56.372 "copy": true, 00:29:56.372 "nvme_iov_md": false 00:29:56.372 }, 00:29:56.372 
"memory_domains": [ 00:29:56.372 { 00:29:56.372 "dma_device_id": "system", 00:29:56.372 "dma_device_type": 1 00:29:56.372 } 00:29:56.372 ], 00:29:56.372 "driver_specific": { 00:29:56.372 "nvme": [ 00:29:56.372 { 00:29:56.372 "trid": { 00:29:56.372 "trtype": "TCP", 00:29:56.372 "adrfam": "IPv4", 00:29:56.372 "traddr": "10.0.0.2", 00:29:56.372 "trsvcid": "4420", 00:29:56.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.372 }, 00:29:56.372 "ctrlr_data": { 00:29:56.372 "cntlid": 1, 00:29:56.372 "vendor_id": "0x8086", 00:29:56.372 "model_number": "SPDK bdev Controller", 00:29:56.372 "serial_number": "00000000000000000000", 00:29:56.372 "firmware_revision": "25.01", 00:29:56.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.372 "oacs": { 00:29:56.372 "security": 0, 00:29:56.372 "format": 0, 00:29:56.372 "firmware": 0, 00:29:56.372 "ns_manage": 0 00:29:56.372 }, 00:29:56.372 "multi_ctrlr": true, 00:29:56.372 "ana_reporting": false 00:29:56.372 }, 00:29:56.372 "vs": { 00:29:56.372 "nvme_version": "1.3" 00:29:56.372 }, 00:29:56.372 "ns_data": { 00:29:56.372 "id": 1, 00:29:56.372 "can_share": true 00:29:56.372 } 00:29:56.372 } 00:29:56.372 ], 00:29:56.372 "mp_policy": "active_passive" 00:29:56.372 } 00:29:56.372 } 00:29:56.372 ] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 [2024-12-13 12:35:23.724733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:56.372 [2024-12-13 12:35:23.724792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e230 (9): Bad file descriptor 00:29:56.372 [2024-12-13 12:35:23.858855] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 [ 00:29:56.372 { 00:29:56.372 "name": "nvme0n1", 00:29:56.372 "aliases": [ 00:29:56.372 "710b7c8d-0228-49e3-8956-639600c5edae" 00:29:56.372 ], 00:29:56.372 "product_name": "NVMe disk", 00:29:56.372 "block_size": 512, 00:29:56.372 "num_blocks": 2097152, 00:29:56.372 "uuid": "710b7c8d-0228-49e3-8956-639600c5edae", 00:29:56.372 "numa_id": 1, 00:29:56.372 "assigned_rate_limits": { 00:29:56.372 "rw_ios_per_sec": 0, 00:29:56.372 "rw_mbytes_per_sec": 0, 00:29:56.372 "r_mbytes_per_sec": 0, 00:29:56.372 "w_mbytes_per_sec": 0 00:29:56.372 }, 00:29:56.372 "claimed": false, 00:29:56.372 "zoned": false, 00:29:56.372 "supported_io_types": { 00:29:56.372 "read": true, 00:29:56.372 "write": true, 00:29:56.372 "unmap": false, 00:29:56.372 "flush": true, 00:29:56.372 "reset": true, 00:29:56.372 "nvme_admin": true, 00:29:56.372 "nvme_io": true, 00:29:56.372 "nvme_io_md": false, 00:29:56.372 "write_zeroes": true, 00:29:56.372 "zcopy": false, 00:29:56.372 "get_zone_info": false, 00:29:56.372 "zone_management": false, 00:29:56.372 "zone_append": false, 00:29:56.372 "compare": true, 00:29:56.372 "compare_and_write": true, 00:29:56.372 "abort": true, 00:29:56.372 "seek_hole": false, 00:29:56.372 "seek_data": false, 00:29:56.372 "copy": true, 00:29:56.372 "nvme_iov_md": false 00:29:56.372 }, 00:29:56.372 "memory_domains": [ 00:29:56.372 { 00:29:56.372 "dma_device_id": "system", 00:29:56.372 "dma_device_type": 1 00:29:56.372 } 00:29:56.372 ], 00:29:56.372 "driver_specific": { 00:29:56.372 "nvme": [ 00:29:56.372 { 00:29:56.372 "trid": { 00:29:56.372 "trtype": "TCP", 00:29:56.372 "adrfam": "IPv4", 00:29:56.372 "traddr": "10.0.0.2", 00:29:56.372 "trsvcid": "4420", 00:29:56.372 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.372 }, 00:29:56.372 "ctrlr_data": { 00:29:56.372 "cntlid": 2, 00:29:56.372 "vendor_id": "0x8086", 00:29:56.372 "model_number": "SPDK bdev Controller", 00:29:56.372 "serial_number": "00000000000000000000", 00:29:56.372 "firmware_revision": "25.01", 00:29:56.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.372 "oacs": { 00:29:56.372 "security": 0, 00:29:56.372 "format": 0, 00:29:56.372 "firmware": 0, 00:29:56.372 "ns_manage": 0 00:29:56.372 }, 00:29:56.372 "multi_ctrlr": true, 00:29:56.372 "ana_reporting": false 00:29:56.372 }, 00:29:56.372 "vs": { 00:29:56.372 "nvme_version": "1.3" 00:29:56.372 }, 00:29:56.372 "ns_data": { 00:29:56.372 "id": 1, 00:29:56.372 "can_share": true 00:29:56.372 } 00:29:56.372 } 00:29:56.372 ], 00:29:56.372 "mp_policy": "active_passive" 00:29:56.372 } 00:29:56.372 } 00:29:56.372 ] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QbjYB519dh 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QbjYB519dh 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.QbjYB519dh 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 [2024-12-13 12:35:23.933356] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:56.372 [2024-12-13 12:35:23.933445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 [2024-12-13 12:35:23.949409] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:56.372 nvme0n1 00:29:56.372 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.372 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:56.372 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.372 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 [ 00:29:56.372 { 00:29:56.372 "name": "nvme0n1", 00:29:56.372 "aliases": [ 00:29:56.372 "710b7c8d-0228-49e3-8956-639600c5edae" 00:29:56.372 ], 00:29:56.372 "product_name": "NVMe disk", 00:29:56.372 "block_size": 512, 00:29:56.373 "num_blocks": 2097152, 00:29:56.373 "uuid": "710b7c8d-0228-49e3-8956-639600c5edae", 00:29:56.373 "numa_id": 1, 00:29:56.373 "assigned_rate_limits": { 00:29:56.373 "rw_ios_per_sec": 0, 00:29:56.373 "rw_mbytes_per_sec": 0, 00:29:56.373 "r_mbytes_per_sec": 0, 00:29:56.373 "w_mbytes_per_sec": 0 00:29:56.373 }, 00:29:56.373 "claimed": false, 00:29:56.373 "zoned": false, 00:29:56.373 "supported_io_types": { 00:29:56.373 "read": true, 00:29:56.373 "write": true, 00:29:56.373 "unmap": false, 00:29:56.373 "flush": true, 00:29:56.373 "reset": true, 00:29:56.373 "nvme_admin": true, 00:29:56.373 "nvme_io": true, 00:29:56.373 "nvme_io_md": false, 00:29:56.373 "write_zeroes": true, 00:29:56.373 "zcopy": false, 00:29:56.373 "get_zone_info": false, 00:29:56.373 "zone_management": false, 00:29:56.373 "zone_append": false, 00:29:56.373 "compare": true, 00:29:56.373 "compare_and_write": true, 00:29:56.373 "abort": true, 00:29:56.373 "seek_hole": false, 00:29:56.373 "seek_data": false, 00:29:56.373 "copy": true, 00:29:56.373 "nvme_iov_md": false 00:29:56.373 }, 00:29:56.373 "memory_domains": [ 00:29:56.373 { 00:29:56.373 "dma_device_id": "system", 00:29:56.373 "dma_device_type": 1 00:29:56.373 } 00:29:56.373 ], 00:29:56.373 "driver_specific": { 00:29:56.373 "nvme": [ 00:29:56.373 { 00:29:56.373 "trid": { 00:29:56.373 "trtype": "TCP", 00:29:56.373 "adrfam": "IPv4", 00:29:56.373 "traddr": "10.0.0.2", 00:29:56.373 "trsvcid": "4421", 00:29:56.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.373 }, 00:29:56.373 "ctrlr_data": { 00:29:56.373 "cntlid": 3, 00:29:56.373 "vendor_id": "0x8086", 00:29:56.373 "model_number": "SPDK bdev Controller", 00:29:56.373 "serial_number": "00000000000000000000", 00:29:56.373 "firmware_revision": "25.01", 00:29:56.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.373 "oacs": { 00:29:56.373 "security": 0, 00:29:56.373 "format": 0, 00:29:56.373 "firmware": 0, 00:29:56.373 "ns_manage": 0 00:29:56.373 }, 00:29:56.373 "multi_ctrlr": true, 00:29:56.373 "ana_reporting": false 00:29:56.373 }, 00:29:56.373 "vs": { 00:29:56.373 "nvme_version": "1.3" 00:29:56.373 }, 00:29:56.373 "ns_data": { 00:29:56.373 "id": 1, 00:29:56.373 "can_share": true 00:29:56.373 } 00:29:56.373 } 00:29:56.373 ], 00:29:56.373 "mp_policy": "active_passive" 00:29:56.373 } 00:29:56.373 } 00:29:56.373 ] 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.QbjYB519dh 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
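The TLS leg above adds four things on top of the plain-TCP path: a pre-shared key in NVMe TLS interchange format loaded through the keyring, allow-any-host disabled so the subsystem's host list is enforced, a second listener on port 4421 marked --secure-channel, and an attach that presents both the host NQN and the PSK. A condensed sketch of that sequence, with the NQNs, port, and test key copied from the trace (note that both tcp.c and bdev_nvme_rpc.c still log TLS support as experimental):

key=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"    # owner-only permissions, as in the trace
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0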
00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.373 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.632 rmmod nvme_tcp 00:29:56.632 rmmod nvme_fabrics 00:29:56.632 rmmod nvme_keyring 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 445640 ']' 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 445640 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 445640 ']' 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 445640 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445640 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445640' 00:29:56.632 killing process with pid 445640 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 445640 00:29:56.632 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 445640 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.633 
12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.633 12:35:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.174 00:29:59.174 real 0m9.381s 00:29:59.174 user 0m3.080s 00:29:59.174 sys 0m4.746s 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 ************************************ 00:29:59.174 END TEST nvmf_async_init 00:29:59.174 ************************************ 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.174 ************************************ 00:29:59.174 START TEST dma 00:29:59.174 ************************************ 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:59.174 * Looking for test storage... 00:29:59.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.174 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.175 --rc genhtml_branch_coverage=1 00:29:59.175 --rc genhtml_function_coverage=1 00:29:59.175 --rc genhtml_legend=1 00:29:59.175 --rc geninfo_all_blocks=1 00:29:59.175 --rc geninfo_unexecuted_blocks=1 00:29:59.175 00:29:59.175 ' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.175 --rc genhtml_branch_coverage=1 00:29:59.175 --rc genhtml_function_coverage=1 00:29:59.175 --rc genhtml_legend=1 00:29:59.175 --rc geninfo_all_blocks=1 00:29:59.175 --rc geninfo_unexecuted_blocks=1 00:29:59.175 00:29:59.175 ' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.175 --rc genhtml_branch_coverage=1 00:29:59.175 --rc genhtml_function_coverage=1 00:29:59.175 --rc genhtml_legend=1 00:29:59.175 --rc geninfo_all_blocks=1 00:29:59.175 --rc geninfo_unexecuted_blocks=1 00:29:59.175 00:29:59.175 ' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:59.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.175 --rc genhtml_branch_coverage=1 00:29:59.175 --rc genhtml_function_coverage=1 00:29:59.175 --rc genhtml_legend=1 00:29:59.175 --rc geninfo_all_blocks=1 00:29:59.175 --rc geninfo_unexecuted_blocks=1 00:29:59.175 00:29:59.175 ' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.175 
12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:59.175 00:29:59.175 real 0m0.214s 00:29:59.175 user 0m0.131s 00:29:59.175 sys 0m0.097s 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:59.175 ************************************ 00:29:59.175 END TEST dma 00:29:59.175 ************************************ 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.175 ************************************ 00:29:59.175 START TEST nvmf_identify 00:29:59.175 
************************************ 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:59.175 * Looking for test storage... 00:29:59.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:59.175 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.435 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.436 --rc genhtml_branch_coverage=1 00:29:59.436 --rc genhtml_function_coverage=1 00:29:59.436 --rc genhtml_legend=1 00:29:59.436 --rc geninfo_all_blocks=1 00:29:59.436 --rc geninfo_unexecuted_blocks=1 00:29:59.436 00:29:59.436 ' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.436 --rc genhtml_branch_coverage=1 00:29:59.436 --rc genhtml_function_coverage=1 00:29:59.436 --rc genhtml_legend=1 00:29:59.436 --rc geninfo_all_blocks=1 00:29:59.436 --rc geninfo_unexecuted_blocks=1 00:29:59.436 00:29:59.436 ' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.436 --rc genhtml_branch_coverage=1 00:29:59.436 --rc genhtml_function_coverage=1 00:29:59.436 --rc genhtml_legend=1 00:29:59.436 --rc geninfo_all_blocks=1 00:29:59.436 --rc geninfo_unexecuted_blocks=1 00:29:59.436 00:29:59.436 ' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.436 --rc genhtml_branch_coverage=1 00:29:59.436 --rc genhtml_function_coverage=1 00:29:59.436 --rc genhtml_legend=1 00:29:59.436 --rc geninfo_all_blocks=1 00:29:59.436 --rc geninfo_unexecuted_blocks=1 00:29:59.436 00:29:59.436 ' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.436 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.437 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.011 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:06.012 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:06.012 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:06.012 Found net devices under 0000:af:00.0: cvl_0_0 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:06.012 Found net devices under 0000:af:00.1: cvl_0_1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:30:06.012 00:30:06.012 --- 10.0.0.2 ping statistics --- 00:30:06.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.012 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:30:06.012 00:30:06.012 --- 10.0.0.1 ping statistics --- 00:30:06.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.012 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=449389 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 449389 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 449389 ']' 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.012 12:35:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.012 [2024-12-13 12:35:32.928765] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:30:06.012 [2024-12-13 12:35:32.928820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.012 [2024-12-13 12:35:33.004897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.012 [2024-12-13 12:35:33.029823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.012 [2024-12-13 12:35:33.029862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.012 [2024-12-13 12:35:33.029869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.013 [2024-12-13 12:35:33.029875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.013 [2024-12-13 12:35:33.029880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.013 [2024-12-13 12:35:33.031358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.013 [2024-12-13 12:35:33.031468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.013 [2024-12-13 12:35:33.031557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.013 [2024-12-13 12:35:33.031559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 [2024-12-13 12:35:33.128159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 Malloc0 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 [2024-12-13 12:35:33.228474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.013 [ 00:30:06.013 { 00:30:06.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:06.013 "subtype": "Discovery", 00:30:06.013 "listen_addresses": [ 00:30:06.013 { 00:30:06.013 "trtype": "TCP", 00:30:06.013 "adrfam": "IPv4", 00:30:06.013 "traddr": "10.0.0.2", 00:30:06.013 "trsvcid": "4420" 00:30:06.013 } 00:30:06.013 ], 00:30:06.013 "allow_any_host": true, 00:30:06.013 "hosts": [] 00:30:06.013 }, 00:30:06.013 { 00:30:06.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.013 "subtype": "NVMe", 00:30:06.013 "listen_addresses": [ 00:30:06.013 { 00:30:06.013 "trtype": "TCP", 00:30:06.013 "adrfam": "IPv4", 00:30:06.013 "traddr": "10.0.0.2", 00:30:06.013 "trsvcid": "4420" 00:30:06.013 } 00:30:06.013 ], 00:30:06.013 "allow_any_host": true, 00:30:06.013 "hosts": [], 00:30:06.013 "serial_number": "SPDK00000000000001", 00:30:06.013 "model_number": "SPDK bdev Controller", 00:30:06.013 "max_namespaces": 32, 00:30:06.013 "min_cntlid": 1, 00:30:06.013 "max_cntlid": 65519, 00:30:06.013 "namespaces": [ 00:30:06.013 { 00:30:06.013 "nsid": 1, 00:30:06.013 "bdev_name": "Malloc0", 00:30:06.013 "name": "Malloc0", 00:30:06.013 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:06.013 "eui64": "ABCDEF0123456789", 00:30:06.013 "uuid": "7a32584c-f978-4f5b-a0d2-d8fb05573a0e" 00:30:06.013 } 00:30:06.013 ] 00:30:06.013 } 00:30:06.013 ] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.013 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:06.013 [2024-12-13 12:35:33.282451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:06.013 [2024-12-13 12:35:33.282486] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449444 ] 00:30:06.013 [2024-12-13 12:35:33.321138] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:06.013 [2024-12-13 12:35:33.321180] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.013 [2024-12-13 12:35:33.321188] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.013 [2024-12-13 12:35:33.321198] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.013 [2024-12-13 12:35:33.321208] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.013 [2024-12-13 12:35:33.325024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:06.013 [2024-12-13 12:35:33.325059] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc22de0 0 00:30:06.013 [2024-12-13 12:35:33.331795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.013 [2024-12-13 12:35:33.331810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.013 [2024-12-13 12:35:33.331814] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.013 [2024-12-13 12:35:33.331817] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.013 [2024-12-13 12:35:33.331845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.331851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.331855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.013 [2024-12-13 12:35:33.331867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.013 [2024-12-13 12:35:33.331884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.013 [2024-12-13 12:35:33.338792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.013 [2024-12-13 12:35:33.338801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.013 [2024-12-13 12:35:33.338805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.338809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.013 [2024-12-13 12:35:33.338819] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.013 [2024-12-13 12:35:33.338825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:06.013 [2024-12-13 12:35:33.338829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:06.013 [2024-12-13 12:35:33.338842] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.338845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.338849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.013 [2024-12-13 12:35:33.338856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.013 [2024-12-13 12:35:33.338868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.013 [2024-12-13 12:35:33.339055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.013 [2024-12-13 12:35:33.339060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.013 [2024-12-13 12:35:33.339063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.339067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.013 [2024-12-13 12:35:33.339072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:06.013 [2024-12-13 12:35:33.339079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:06.013 [2024-12-13 12:35:33.339085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.339089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.339092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.013 [2024-12-13 12:35:33.339100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.013 [2024-12-13 12:35:33.339110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.013 [2024-12-13 12:35:33.339192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.013 [2024-12-13 12:35:33.339198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.013 [2024-12-13 12:35:33.339201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.013 [2024-12-13 12:35:33.339204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.013 [2024-12-13 12:35:33.339208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:06.013 [2024-12-13 12:35:33.339215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.013 [2024-12-13 12:35:33.339221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.339232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.014 [2024-12-13 12:35:33.339242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 
00:30:06.014 [2024-12-13 12:35:33.339340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.339345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.339349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.339356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.014 [2024-12-13 12:35:33.339364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.339376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.014 [2024-12-13 12:35:33.339386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.014 [2024-12-13 12:35:33.339490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.339496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.339499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.339506] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.014 [2024-12-13 12:35:33.339510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.014 [2024-12-13 12:35:33.339517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.014 [2024-12-13 12:35:33.339625] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:06.014 [2024-12-13 12:35:33.339630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:06.014 [2024-12-13 12:35:33.339639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.339650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.014 [2024-12-13 12:35:33.339660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.014 [2024-12-13 12:35:33.339724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.339730] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.339733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.339740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.014 [2024-12-13 12:35:33.339748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.339760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.014 [2024-12-13 12:35:33.339769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.014 [2024-12-13 12:35:33.339878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.339884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.339887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.339894] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.014 [2024-12-13 12:35:33.339898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.014 [2024-12-13 12:35:33.339905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:06.014 [2024-12-13 12:35:33.339912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.014 [2024-12-13 12:35:33.339919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.339922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.339928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.014 [2024-12-13 12:35:33.339938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.014 [2024-12-13 12:35:33.340028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.014 [2024-12-13 12:35:33.340034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.014 [2024-12-13 12:35:33.340037] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc22de0): datao=0, datal=4096, cccid=0 00:30:06.014 [2024-12-13 12:35:33.340045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xc7df40) on tqpair(0xc22de0): expected_datao=0, payload_size=4096 00:30:06.014 [2024-12-13 12:35:33.340049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340061] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.340083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.340086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.340097] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:06.014 [2024-12-13 12:35:33.340102] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:06.014 [2024-12-13 12:35:33.340106] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:06.014 [2024-12-13 12:35:33.340110] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:06.014 [2024-12-13 12:35:33.340114] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:06.014 [2024-12-13 12:35:33.340118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.014 [2024-12-13 12:35:33.340128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.014 [2024-12-13 12:35:33.340135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.340147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.014 [2024-12-13 12:35:33.340157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.014 [2024-12-13 12:35:33.340230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.014 [2024-12-13 12:35:33.340235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.014 [2024-12-13 12:35:33.340238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.014 [2024-12-13 12:35:33.340248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 
12:35:33.340260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.014 [2024-12-13 12:35:33.340265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.340276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.014 [2024-12-13 12:35:33.340281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc22de0) 00:30:06.014 [2024-12-13 12:35:33.340291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.014 [2024-12-13 12:35:33.340300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.014 [2024-12-13 12:35:33.340306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.340311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.015 [2024-12-13 12:35:33.340315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.015 [2024-12-13 12:35:33.340325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.015 [2024-12-13 12:35:33.340331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.340339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.015 [2024-12-13 12:35:33.340350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7df40, cid 0, qid 0 00:30:06.015 [2024-12-13 12:35:33.340354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e0c0, cid 1, qid 0 00:30:06.015 [2024-12-13 12:35:33.340358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e240, cid 2, qid 0 00:30:06.015 [2024-12-13 12:35:33.340363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.015 [2024-12-13 12:35:33.340367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e540, cid 4, qid 0 00:30:06.015 [2024-12-13 12:35:33.340482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.340488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.340491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 
[2024-12-13 12:35:33.340494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e540) on tqpair=0xc22de0 00:30:06.015 [2024-12-13 12:35:33.340498] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:06.015 [2024-12-13 12:35:33.340502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:06.015 [2024-12-13 12:35:33.340511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.340520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.015 [2024-12-13 12:35:33.340528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e540, cid 4, qid 0 00:30:06.015 [2024-12-13 12:35:33.340603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.015 [2024-12-13 12:35:33.340609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.015 [2024-12-13 12:35:33.340612] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc22de0): datao=0, datal=4096, cccid=4 00:30:06.015 [2024-12-13 12:35:33.340619] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7e540) on tqpair(0xc22de0): expected_datao=0, payload_size=4096 00:30:06.015 [2024-12-13 12:35:33.340623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340651] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340655] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.340740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.340743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e540) on tqpair=0xc22de0 00:30:06.015 [2024-12-13 12:35:33.340756] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:06.015 [2024-12-13 12:35:33.340778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.340794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.015 [2024-12-13 12:35:33.340800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.340811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.015 [2024-12-13 12:35:33.340824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e540, cid 4, qid 0 00:30:06.015 [2024-12-13 12:35:33.340829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e6c0, cid 5, qid 0 00:30:06.015 [2024-12-13 12:35:33.340931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.015 [2024-12-13 12:35:33.340937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.015 [2024-12-13 12:35:33.340940] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340943] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc22de0): datao=0, datal=1024, cccid=4 00:30:06.015 [2024-12-13 12:35:33.340948] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7e540) on tqpair(0xc22de0): expected_datao=0, payload_size=1024 00:30:06.015 [2024-12-13 12:35:33.340951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340957] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340960] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.340969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.340972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.340975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e6c0) on tqpair=0xc22de0 00:30:06.015 [2024-12-13 12:35:33.384793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.384803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.384807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.384810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e540) on tqpair=0xc22de0 00:30:06.015 [2024-12-13 12:35:33.384820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.384824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.384830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.015 [2024-12-13 12:35:33.384846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e540, cid 4, qid 0 00:30:06.015 [2024-12-13 12:35:33.384943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.015 [2024-12-13 12:35:33.384949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.015 [2024-12-13 12:35:33.384952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.384955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc22de0): datao=0, datal=3072, cccid=4 00:30:06.015 [2024-12-13 12:35:33.384962] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7e540) on tqpair(0xc22de0): expected_datao=0, payload_size=3072 00:30:06.015 [2024-12-13 12:35:33.384966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
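The GET LOG PAGE commands above are the discovery-log fetch. For this opcode, cdw10 carries the log page ID in bits 7:0 and the zero-based dword count in the upper bits, so cdw10:00ff0070 asks for 256 dwords of page 0x70 (the 1024-byte page header, enough to learn numrec), cdw10:02ff0070 for 768 dwords (the full 3072-byte page: header plus two 1024-byte records), and the cdw10:00010070 read just below re-fetches the 8-byte generation counter to confirm the log did not change mid-read; the datal values in the surrounding c2h entries match. A minimal sketch of issuing one such read through SPDK's public API, assuming an already-connected ctrlr; the helper name read_discovery_log is illustrative, not from the test:

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void
    get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)ctx = true;   /* completion flag; error handling elided */
    }

    /* Read `len` bytes of the discovery log page (LID 0x70) into `page`. */
    static int
    read_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
                       struct spdk_nvmf_discovery_log_page *page, uint32_t len)
    {
        bool done = false;
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0 /* nsid, as logged */, page,
                                              len, 0 /* offset */,
                                              get_log_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            /* Poll the admin queue until the completion above fires. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        /* page->genctr and page->numrec correspond to the "Generation
         * Counter: 2" and "Number of Records: 2" lines printed below. */
        return 0;
    }

The identify tool in this log issues the read in three steps (header, full page, genctr re-check) rather than the single read sketched here.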
00:30:06.015 [2024-12-13 12:35:33.384971] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.384975] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.385052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.385055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e540) on tqpair=0xc22de0 00:30:06.015 [2024-12-13 12:35:33.385065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc22de0) 00:30:06.015 [2024-12-13 12:35:33.385074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.015 [2024-12-13 12:35:33.385087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e540, cid 4, qid 0 00:30:06.015 [2024-12-13 12:35:33.385197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.015 [2024-12-13 12:35:33.385202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.015 [2024-12-13 12:35:33.385205] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385208] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc22de0): datao=0, datal=8, cccid=4 00:30:06.015 [2024-12-13 12:35:33.385212] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7e540) on tqpair(0xc22de0): expected_datao=0, payload_size=8 00:30:06.015 [2024-12-13 12:35:33.385216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385221] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.385224] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.425909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.015 [2024-12-13 12:35:33.425920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.015 [2024-12-13 12:35:33.425923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.015 [2024-12-13 12:35:33.425926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e540) on tqpair=0xc22de0 00:30:06.015 ===================================================== 00:30:06.015 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:06.015 ===================================================== 00:30:06.015 Controller Capabilities/Features 00:30:06.015 ================================ 00:30:06.015 Vendor ID: 0000 00:30:06.015 Subsystem Vendor ID: 0000 00:30:06.015 Serial Number: .................... 00:30:06.015 Model Number: ........................................ 
00:30:06.015 Firmware Version: 25.01 00:30:06.015 Recommended Arb Burst: 0 00:30:06.015 IEEE OUI Identifier: 00 00 00 00:30:06.015 Multi-path I/O 00:30:06.015 May have multiple subsystem ports: No 00:30:06.015 May have multiple controllers: No 00:30:06.015 Associated with SR-IOV VF: No 00:30:06.015 Max Data Transfer Size: 131072 00:30:06.015 Max Number of Namespaces: 0 00:30:06.015 Max Number of I/O Queues: 1024 00:30:06.015 NVMe Specification Version (VS): 1.3 00:30:06.015 NVMe Specification Version (Identify): 1.3 00:30:06.016 Maximum Queue Entries: 128 00:30:06.016 Contiguous Queues Required: Yes 00:30:06.016 Arbitration Mechanisms Supported 00:30:06.016 Weighted Round Robin: Not Supported 00:30:06.016 Vendor Specific: Not Supported 00:30:06.016 Reset Timeout: 15000 ms 00:30:06.016 Doorbell Stride: 4 bytes 00:30:06.016 NVM Subsystem Reset: Not Supported 00:30:06.016 Command Sets Supported 00:30:06.016 NVM Command Set: Supported 00:30:06.016 Boot Partition: Not Supported 00:30:06.016 Memory Page Size Minimum: 4096 bytes 00:30:06.016 Memory Page Size Maximum: 4096 bytes 00:30:06.016 Persistent Memory Region: Not Supported 00:30:06.016 Optional Asynchronous Events Supported 00:30:06.016 Namespace Attribute Notices: Not Supported 00:30:06.016 Firmware Activation Notices: Not Supported 00:30:06.016 ANA Change Notices: Not Supported 00:30:06.016 PLE Aggregate Log Change Notices: Not Supported 00:30:06.016 LBA Status Info Alert Notices: Not Supported 00:30:06.016 EGE Aggregate Log Change Notices: Not Supported 00:30:06.016 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.016 Zone Descriptor Change Notices: Not Supported 00:30:06.016 Discovery Log Change Notices: Supported 00:30:06.016 Controller Attributes 00:30:06.016 128-bit Host Identifier: Not Supported 00:30:06.016 Non-Operational Permissive Mode: Not Supported 00:30:06.016 NVM Sets: Not Supported 00:30:06.016 Read Recovery Levels: Not Supported 00:30:06.016 Endurance Groups: Not Supported 00:30:06.016 Predictable Latency Mode: Not Supported 00:30:06.016 Traffic Based Keep Alive: Not Supported 00:30:06.016 Namespace Granularity: Not Supported 00:30:06.016 SQ Associations: Not Supported 00:30:06.016 UUID List: Not Supported 00:30:06.016 Multi-Domain Subsystem: Not Supported 00:30:06.016 Fixed Capacity Management: Not Supported 00:30:06.016 Variable Capacity Management: Not Supported 00:30:06.016 Delete Endurance Group: Not Supported 00:30:06.016 Delete NVM Set: Not Supported 00:30:06.016 Extended LBA Formats Supported: Not Supported 00:30:06.016 Flexible Data Placement Supported: Not Supported 00:30:06.016 00:30:06.016 Controller Memory Buffer Support 00:30:06.016 ================================ 00:30:06.016 Supported: No 00:30:06.016 00:30:06.016 Persistent Memory Region Support 00:30:06.016 ================================ 00:30:06.016 Supported: No 00:30:06.016 00:30:06.016 Admin Command Set Attributes 00:30:06.016 ============================ 00:30:06.016 Security Send/Receive: Not Supported 00:30:06.016 Format NVM: Not Supported 00:30:06.016 Firmware Activate/Download: Not Supported 00:30:06.016 Namespace Management: Not Supported 00:30:06.016 Device Self-Test: Not Supported 00:30:06.016 Directives: Not Supported 00:30:06.016 NVMe-MI: Not Supported 00:30:06.016 Virtualization Management: Not Supported 00:30:06.016 Doorbell Buffer Config: Not Supported 00:30:06.016 Get LBA Status Capability: Not Supported 00:30:06.016 Command & Feature Lockdown Capability: Not Supported 00:30:06.016 Abort Command Limit: 1 00:30:06.016 Async
Event Request Limit: 4 00:30:06.016 Number of Firmware Slots: N/A 00:30:06.016 Firmware Slot 1 Read-Only: N/A 00:30:06.016 Firmware Activation Without Reset: N/A 00:30:06.016 Multiple Update Detection Support: N/A 00:30:06.016 Firmware Update Granularity: No Information Provided 00:30:06.016 Per-Namespace SMART Log: No 00:30:06.016 Asymmetric Namespace Access Log Page: Not Supported 00:30:06.016 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:06.016 Command Effects Log Page: Not Supported 00:30:06.016 Get Log Page Extended Data: Supported 00:30:06.016 Telemetry Log Pages: Not Supported 00:30:06.016 Persistent Event Log Pages: Not Supported 00:30:06.016 Supported Log Pages Log Page: May Support 00:30:06.016 Commands Supported & Effects Log Page: Not Supported 00:30:06.016 Feature Identifiers & Effects Log Page: May Support 00:30:06.016 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.016 Data Area 4 for Telemetry Log: Not Supported 00:30:06.016 Error Log Page Entries Supported: 128 00:30:06.016 Keep Alive: Not Supported 00:30:06.016 00:30:06.016 NVM Command Set Attributes 00:30:06.016 ========================== 00:30:06.016 Submission Queue Entry Size 00:30:06.016 Max: 1 00:30:06.016 Min: 1 00:30:06.016 Completion Queue Entry Size 00:30:06.016 Max: 1 00:30:06.016 Min: 1 00:30:06.016 Number of Namespaces: 0 00:30:06.016 Compare Command: Not Supported 00:30:06.016 Write Uncorrectable Command: Not Supported 00:30:06.016 Dataset Management Command: Not Supported 00:30:06.016 Write Zeroes Command: Not Supported 00:30:06.016 Set Features Save Field: Not Supported 00:30:06.016 Reservations: Not Supported 00:30:06.016 Timestamp: Not Supported 00:30:06.016 Copy: Not Supported 00:30:06.016 Volatile Write Cache: Not Present 00:30:06.016 Atomic Write Unit (Normal): 1 00:30:06.016 Atomic Write Unit (PFail): 1 00:30:06.016 Atomic Compare & Write Unit: 1 00:30:06.016 Fused Compare & Write: Supported 00:30:06.016 Scatter-Gather List 00:30:06.016 SGL Command Set: Supported 00:30:06.016 SGL Keyed: Supported 00:30:06.016 SGL Bit Bucket Descriptor: Not Supported 00:30:06.016 SGL Metadata Pointer: Not Supported 00:30:06.016 Oversized SGL: Not Supported 00:30:06.016 SGL Metadata Address: Not Supported 00:30:06.016 SGL Offset: Supported 00:30:06.016 Transport SGL Data Block: Not Supported 00:30:06.016 Replay Protected Memory Block: Not Supported 00:30:06.016 00:30:06.016 Firmware Slot Information 00:30:06.016 ========================= 00:30:06.016 Active slot: 0 00:30:06.016 00:30:06.016 00:30:06.016 Error Log 00:30:06.016 ========= 00:30:06.016 00:30:06.016 Active Namespaces 00:30:06.016 ================= 00:30:06.016 Discovery Log Page 00:30:06.016 ================== 00:30:06.016 Generation Counter: 2 00:30:06.016 Number of Records: 2 00:30:06.016 Record Format: 0 00:30:06.016 00:30:06.016 Discovery Log Entry 0 00:30:06.016 ---------------------- 00:30:06.016 Transport Type: 3 (TCP) 00:30:06.016 Address Family: 1 (IPv4) 00:30:06.016 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:06.016 Entry Flags: 00:30:06.016 Duplicate Returned Information: 1 00:30:06.016 Explicit Persistent Connection Support for Discovery: 1 00:30:06.016 Transport Requirements: 00:30:06.016 Secure Channel: Not Required 00:30:06.016 Port ID: 0 (0x0000) 00:30:06.016 Controller ID: 65535 (0xffff) 00:30:06.016 Admin Max SQ Size: 128 00:30:06.016 Transport Service Identifier: 4420 00:30:06.016 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:06.016 Transport Address: 10.0.0.2 00:30:06.016
Discovery Log Entry 1 00:30:06.016 ---------------------- 00:30:06.016 Transport Type: 3 (TCP) 00:30:06.016 Address Family: 1 (IPv4) 00:30:06.016 Subsystem Type: 2 (NVM Subsystem) 00:30:06.016 Entry Flags: 00:30:06.016 Duplicate Returned Information: 0 00:30:06.016 Explicit Persistent Connection Support for Discovery: 0 00:30:06.016 Transport Requirements: 00:30:06.016 Secure Channel: Not Required 00:30:06.016 Port ID: 0 (0x0000) 00:30:06.016 Controller ID: 65535 (0xffff) 00:30:06.016 Admin Max SQ Size: 128 00:30:06.016 Transport Service Identifier: 4420 00:30:06.016 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:06.017 Transport Address: 10.0.0.2 [2024-12-13 12:35:33.426008] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:06.017 [2024-12-13 12:35:33.426020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7df40) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.017 [2024-12-13 12:35:33.426031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e0c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.017 [2024-12-13 12:35:33.426040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e240) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.017 [2024-12-13 12:35:33.426048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.017 [2024-12-13 12:35:33.426060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426177] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426319] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:06.017 [2024-12-13 12:35:33.426324] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:06.017 [2024-12-13 12:35:33.426332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426635] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.426858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.426865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.426869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.426881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.426888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.426893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.426903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.427010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.427016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.427020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.427032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.427047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.427057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.427160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.427166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.427168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.427179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.427193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.427202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.427266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.427271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.427274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.427286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.427297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.427307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.427413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.427419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.017 [2024-12-13 12:35:33.427422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.017 [2024-12-13 12:35:33.427433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.017 [2024-12-13 12:35:33.427439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.017 [2024-12-13 12:35:33.427445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.017 [2024-12-13 12:35:33.427454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.017 [2024-12-13 12:35:33.427564] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.017 [2024-12-13 12:35:33.427570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.427573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.427584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.427596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.427606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.427715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.427721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.427725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.427737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.427752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.427762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.427824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.427831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.427835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.427848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.427860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.427870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.427968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.427974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.427977] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.427988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.427994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.428134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.428146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.428278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.428289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.428384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 
[2024-12-13 12:35:33.428395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.428530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.428542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.428681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.428692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.428698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.428704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.428713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.428774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.428779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.432792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.432795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.432804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.432808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.018 [2024-12-13 
12:35:33.432811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc22de0) 00:30:06.018 [2024-12-13 12:35:33.432817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.018 [2024-12-13 12:35:33.432830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e3c0, cid 3, qid 0 00:30:06.018 [2024-12-13 12:35:33.432987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.018 [2024-12-13 12:35:33.432994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.018 [2024-12-13 12:35:33.432997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.018 [2024-12-13 12:35:33.433000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e3c0) on tqpair=0xc22de0 00:30:06.018 [2024-12-13 12:35:33.433007] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:30:06.018 00:30:06.018 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:06.018 [2024-12-13 12:35:33.469169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:06.018 [2024-12-13 12:35:33.469213] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449546 ] 00:30:06.018 [2024-12-13 12:35:33.506054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:06.019 [2024-12-13 12:35:33.506092] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.019 [2024-12-13 12:35:33.506097] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.019 [2024-12-13 12:35:33.506108] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.019 [2024-12-13 12:35:33.506115] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.019 [2024-12-13 12:35:33.509925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:06.019 [2024-12-13 12:35:33.509953] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcb1de0 0 00:30:06.019 [2024-12-13 12:35:33.517800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.019 [2024-12-13 12:35:33.517816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.019 [2024-12-13 12:35:33.517819] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.019 [2024-12-13 12:35:33.517822] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.019 [2024-12-13 12:35:33.517843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.517848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.517851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 
12:35:33.517861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.019 [2024-12-13 12:35:33.517877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.525793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.525804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.525807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.525811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.525819] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.019 [2024-12-13 12:35:33.525824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:06.019 [2024-12-13 12:35:33.525832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:06.019 [2024-12-13 12:35:33.525842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.525845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.525849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.525855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.525867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526018] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:06.019 [2024-12-13 12:35:33.526024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:06.019 [2024-12-13 12:35:33.526031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526142] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:06.019 [2024-12-13 12:35:33.526156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526451] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.019 [2024-12-13 12:35:33.526455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526569] 
nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:06.019 [2024-12-13 12:35:33.526573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.019 [2024-12-13 12:35:33.526733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.526868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.019 [2024-12-13 12:35:33.526874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.019 [2024-12-13 12:35:33.526877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.526881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.019 [2024-12-13 12:35:33.526884] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.019 [2024-12-13 12:35:33.526891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.019 [2024-12-13 12:35:33.526897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:06.019 [2024-12-13 12:35:33.526907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.019 [2024-12-13 12:35:33.526915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
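
The records above trace the fabrics controller-enable handshake: read VS and CAP, check CC.EN, write CC.EN = 1 via a Property Set, then poll CSTS.RDY until it reads 1 (15 s timeout). Below is a minimal sketch of that handshake, assuming hypothetical prop_get()/prop_set() helpers standing in for the transport's fabrics Property Get/Set commands (the FABRIC PROPERTY GET/SET records in the log); those helpers are not part of SPDK's public API, only the register types from spdk/nvme_spec.h are.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#include "spdk/nvme_spec.h"

/* Hypothetical stand-ins for the transport's fabrics Property Get/Set
 * commands; offset is the register offset within spdk_nvme_registers. */
extern uint64_t prop_get(uint32_t offset);
extern void prop_set(uint32_t offset, uint64_t value);

static bool
enable_controller(void)
{
	union spdk_nvme_cc_register cc;
	union spdk_nvme_csts_register csts;

	/* "check en": read CC; if the controller were already enabled, the
	 * driver would first disable it and wait for CSTS.RDY = 0 (elided). */
	cc.raw = (uint32_t)prop_get(offsetof(struct spdk_nvme_registers, cc));

	/* "Setting CC.EN = 1". */
	cc.bits.en = 1;
	prop_set(offsetof(struct spdk_nvme_registers, cc), cc.raw);

	/* "setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)": the
	 * real driver advances this as a polled state machine, not a spin. */
	for (int i = 0; i < 15000; i++) {
		csts.raw = (uint32_t)prop_get(offsetof(struct spdk_nvme_registers, csts));
		if (csts.bits.rdy) {
			printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
			return true;
		}
	}
	return false;
}
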
00:30:06.019 [2024-12-13 12:35:33.526918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.019 [2024-12-13 12:35:33.526924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.019 [2024-12-13 12:35:33.526934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.019 [2024-12-13 12:35:33.527055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.019 [2024-12-13 12:35:33.527061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.019 [2024-12-13 12:35:33.527064] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.527067] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=4096, cccid=0 00:30:06.019 [2024-12-13 12:35:33.527071] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0cf40) on tqpair(0xcb1de0): expected_datao=0, payload_size=4096 00:30:06.019 [2024-12-13 12:35:33.527074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.527081] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.019 [2024-12-13 12:35:33.527085] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.020 [2024-12-13 12:35:33.527125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.020 [2024-12-13 12:35:33.527128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.020 [2024-12-13 12:35:33.527138] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:06.020 [2024-12-13 12:35:33.527142] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:06.020 [2024-12-13 12:35:33.527146] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:06.020 [2024-12-13 12:35:33.527150] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:06.020 [2024-12-13 12:35:33.527154] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:06.020 [2024-12-13 12:35:33.527158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 
len:0x0 00:30:06.020 [2024-12-13 12:35:33.527204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.020 [2024-12-13 12:35:33.527273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.020 [2024-12-13 12:35:33.527279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.020 [2024-12-13 12:35:33.527284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.020 [2024-12-13 12:35:33.527295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.020 [2024-12-13 12:35:33.527312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.020 [2024-12-13 12:35:33.527328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.020 [2024-12-13 12:35:33.527344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.020 [2024-12-13 12:35:33.527359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.020 [2024-12-13 12:35:33.527393] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0cf40, cid 0, qid 0 00:30:06.020 [2024-12-13 12:35:33.527398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d0c0, cid 1, qid 0 00:30:06.020 [2024-12-13 12:35:33.527402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d240, cid 2, qid 0 00:30:06.020 [2024-12-13 12:35:33.527406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d3c0, cid 3, qid 0 00:30:06.020 [2024-12-13 12:35:33.527410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.020 [2024-12-13 12:35:33.527525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.020 [2024-12-13 12:35:33.527532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.020 [2024-12-13 12:35:33.527535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.020 [2024-12-13 12:35:33.527542] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:06.020 [2024-12-13 12:35:33.527547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.020 [2024-12-13 12:35:33.527596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.020 [2024-12-13 12:35:33.527658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.020 [2024-12-13 12:35:33.527663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.020 [2024-12-13 12:35:33.527666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.020 [2024-12-13 12:35:33.527718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 
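
Everything from the icreq onward — FABRIC CONNECT, the enable handshake, IDENTIFY, AER configuration, the keep-alive timer and number-of-queues features seen in the records above — is driven by a single connect call in the host API. A minimal sketch of a host program reproducing, at a high level, what this test's spdk_nvme_identify invocation does against the same 10.0.0.2:4420 / cnode1 target; the keep_alive_timeout_ms value is an assumption chosen to match the 10000 ms keep-alive granularity the report below shows.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target the test connects to. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;

	/* The requested keep-alive is what the "GET FEATURES KEEP ALIVE TIMER"
	 * and "Sending keep alive every ..." records reflect. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	ctrlr_opts.keep_alive_timeout_ms = 10000;

	/* Drives the whole initialization sequence logged above. */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect to %s\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
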
00:30:06.020 [2024-12-13 12:35:33.527742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.020 [2024-12-13 12:35:33.527751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.020 [2024-12-13 12:35:33.527838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.020 [2024-12-13 12:35:33.527845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.020 [2024-12-13 12:35:33.527848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527853] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=4096, cccid=4 00:30:06.020 [2024-12-13 12:35:33.527858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d540) on tqpair(0xcb1de0): expected_datao=0, payload_size=4096 00:30:06.020 [2024-12-13 12:35:33.527864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.020 [2024-12-13 12:35:33.527898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.020 [2024-12-13 12:35:33.527902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.020 [2024-12-13 12:35:33.527916] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:06.020 [2024-12-13 12:35:33.527925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:06.020 [2024-12-13 12:35:33.527947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.527952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 00:30:06.020 [2024-12-13 12:35:33.527959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.020 [2024-12-13 12:35:33.527971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.020 [2024-12-13 12:35:33.528100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.020 [2024-12-13 12:35:33.528110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.020 [2024-12-13 12:35:33.528121] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.020 [2024-12-13 12:35:33.528130] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=4096, cccid=4 00:30:06.020 [2024-12-13 12:35:33.528139] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d540) on tqpair(0xcb1de0): expected_datao=0, payload_size=4096 00:30:06.020 [2024-12-13 12:35:33.528146] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528155] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528161] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.528246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.021 [2024-12-13 12:35:33.528323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.021 [2024-12-13 12:35:33.528330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.021 [2024-12-13 12:35:33.528332] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528336] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=4096, cccid=4 00:30:06.021 [2024-12-13 12:35:33.528339] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d540) on tqpair(0xcb1de0): expected_datao=0, payload_size=4096 00:30:06.021 [2024-12-13 12:35:33.528343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528349] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528352] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528404] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528425] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:06.021 [2024-12-13 12:35:33.528429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:06.021 [2024-12-13 12:35:33.528433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:06.021 [2024-12-13 12:35:33.528446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.528461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.021 [2024-12-13 12:35:33.528489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.021 [2024-12-13 12:35:33.528495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d6c0, cid 5, qid 0 00:30:06.021 [2024-12-13 12:35:33.528613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d6c0) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.528668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d6c0, cid 5, qid 0 00:30:06.021 [2024-12-13 12:35:33.528762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d6c0) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.528819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d6c0, cid 5, qid 0 00:30:06.021 [2024-12-13 12:35:33.528922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.528929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.528934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d6c0) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.528949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.528955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.528960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.528970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d6c0, cid 5, qid 0 00:30:06.021 [2024-12-13 12:35:33.529031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.021 [2024-12-13 12:35:33.529037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.021 [2024-12-13 12:35:33.529040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.529043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d6c0) on tqpair=0xcb1de0 00:30:06.021 [2024-12-13 12:35:33.529055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.529061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.529066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.529072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.529075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.529081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.529087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.529090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.529095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.529101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.021 [2024-12-13 12:35:33.529107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcb1de0) 00:30:06.021 [2024-12-13 12:35:33.529113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.021 [2024-12-13 12:35:33.529124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d6c0, cid 5, qid 0 00:30:06.021 [2024-12-13 12:35:33.529128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d540, cid 4, qid 0 00:30:06.021 [2024-12-13 12:35:33.529133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d840, cid 6, qid 0 00:30:06.022 [2024-12-13 12:35:33.529138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d9c0, cid 7, qid 0 00:30:06.022 [2024-12-13 12:35:33.529274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.022 [2024-12-13 12:35:33.529282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.022 [2024-12-13 12:35:33.529286] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529290] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=8192, cccid=5 00:30:06.022 [2024-12-13 12:35:33.529294] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d6c0) on tqpair(0xcb1de0): expected_datao=0, payload_size=8192 00:30:06.022 [2024-12-13 12:35:33.529299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529329] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529333] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.022 [2024-12-13 12:35:33.529343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.022 [2024-12-13 12:35:33.529346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=512, cccid=4 00:30:06.022 [2024-12-13 12:35:33.529352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d540) on tqpair(0xcb1de0): expected_datao=0, payload_size=512 00:30:06.022 [2024-12-13 12:35:33.529356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529361] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529364] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.022 [2024-12-13 12:35:33.529374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.022 [2024-12-13 12:35:33.529377] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529380] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=512, cccid=6 00:30:06.022 [2024-12-13 12:35:33.529384] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d840) on tqpair(0xcb1de0): expected_datao=0, payload_size=512 00:30:06.022 [2024-12-13 12:35:33.529387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529396] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.022 [2024-12-13 12:35:33.529405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.022 [2024-12-13 12:35:33.529408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529411] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb1de0): datao=0, datal=4096, cccid=7 00:30:06.022 [2024-12-13 12:35:33.529415] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd0d9c0) on tqpair(0xcb1de0): expected_datao=0, payload_size=4096 00:30:06.022 [2024-12-13 12:35:33.529418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529424] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529427] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.022 [2024-12-13 12:35:33.529442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.022 [2024-12-13 12:35:33.529445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d6c0) on tqpair=0xcb1de0 00:30:06.022 [2024-12-13 12:35:33.529458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.022 [2024-12-13 12:35:33.529463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.022 [2024-12-13 12:35:33.529466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d540) on tqpair=0xcb1de0 00:30:06.022 [2024-12-13 12:35:33.529481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.022 [2024-12-13 12:35:33.529486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.022 [2024-12-13 12:35:33.529489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d840) on tqpair=0xcb1de0 00:30:06.022 [2024-12-13 12:35:33.529498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.022 [2024-12-13 12:35:33.529503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:06.022 [2024-12-13 12:35:33.529506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.022 [2024-12-13 12:35:33.529509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d9c0) on tqpair=0xcb1de0 00:30:06.022 ===================================================== 00:30:06.022 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.022 ===================================================== 00:30:06.022 Controller Capabilities/Features 00:30:06.022 ================================ 00:30:06.022 Vendor ID: 8086 00:30:06.022 Subsystem Vendor ID: 8086 00:30:06.022 Serial Number: SPDK00000000000001 00:30:06.022 Model Number: SPDK bdev Controller 00:30:06.022 Firmware Version: 25.01 00:30:06.022 Recommended Arb Burst: 6 00:30:06.022 IEEE OUI Identifier: e4 d2 5c 00:30:06.022 Multi-path I/O 00:30:06.022 May have multiple subsystem ports: Yes 00:30:06.022 May have multiple controllers: Yes 00:30:06.022 Associated with SR-IOV VF: No 00:30:06.022 Max Data Transfer Size: 131072 00:30:06.022 Max Number of Namespaces: 32 00:30:06.022 Max Number of I/O Queues: 127 00:30:06.022 NVMe Specification Version (VS): 1.3 00:30:06.022 NVMe Specification Version (Identify): 1.3 00:30:06.022 Maximum Queue Entries: 128 00:30:06.022 Contiguous Queues Required: Yes 00:30:06.022 Arbitration Mechanisms Supported 00:30:06.022 Weighted Round Robin: Not Supported 00:30:06.022 Vendor Specific: Not Supported 00:30:06.022 Reset Timeout: 15000 ms 00:30:06.022 Doorbell Stride: 4 bytes 00:30:06.022 NVM Subsystem Reset: Not Supported 00:30:06.022 Command Sets Supported 00:30:06.022 NVM Command Set: Supported 00:30:06.022 Boot Partition: Not Supported 00:30:06.022 Memory Page Size Minimum: 4096 bytes 00:30:06.022 Memory Page Size Maximum: 4096 bytes 00:30:06.022 Persistent Memory Region: Not Supported 00:30:06.022 Optional Asynchronous Events Supported 00:30:06.022 Namespace Attribute Notices: Supported 00:30:06.022 Firmware Activation Notices: Not Supported 00:30:06.022 ANA Change Notices: Not Supported 00:30:06.022 PLE Aggregate Log Change Notices: Not Supported 00:30:06.022 LBA Status Info Alert Notices: Not Supported 00:30:06.022 EGE Aggregate Log Change Notices: Not Supported 00:30:06.022 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.022 Zone Descriptor Change Notices: Not Supported 00:30:06.022 Discovery Log Change Notices: Not Supported 00:30:06.022 Controller Attributes 00:30:06.022 128-bit Host Identifier: Supported 00:30:06.022 Non-Operational Permissive Mode: Not Supported 00:30:06.022 NVM Sets: Not Supported 00:30:06.022 Read Recovery Levels: Not Supported 00:30:06.022 Endurance Groups: Not Supported 00:30:06.022 Predictable Latency Mode: Not Supported 00:30:06.022 Traffic Based Keep ALive: Not Supported 00:30:06.022 Namespace Granularity: Not Supported 00:30:06.022 SQ Associations: Not Supported 00:30:06.022 UUID List: Not Supported 00:30:06.022 Multi-Domain Subsystem: Not Supported 00:30:06.022 Fixed Capacity Management: Not Supported 00:30:06.022 Variable Capacity Management: Not Supported 00:30:06.022 Delete Endurance Group: Not Supported 00:30:06.022 Delete NVM Set: Not Supported 00:30:06.022 Extended LBA Formats Supported: Not Supported 00:30:06.022 Flexible Data Placement Supported: Not Supported 00:30:06.022 00:30:06.022 Controller Memory Buffer Support 00:30:06.022 ================================ 00:30:06.022 Supported: No 00:30:06.022 00:30:06.022 Persistent Memory Region Support 00:30:06.022 
================================ 00:30:06.022 Supported: No 00:30:06.022 00:30:06.022 Admin Command Set Attributes 00:30:06.022 ============================ 00:30:06.022 Security Send/Receive: Not Supported 00:30:06.022 Format NVM: Not Supported 00:30:06.022 Firmware Activate/Download: Not Supported 00:30:06.022 Namespace Management: Not Supported 00:30:06.022 Device Self-Test: Not Supported 00:30:06.022 Directives: Not Supported 00:30:06.022 NVMe-MI: Not Supported 00:30:06.022 Virtualization Management: Not Supported 00:30:06.022 Doorbell Buffer Config: Not Supported 00:30:06.022 Get LBA Status Capability: Not Supported 00:30:06.022 Command & Feature Lockdown Capability: Not Supported 00:30:06.022 Abort Command Limit: 4 00:30:06.022 Async Event Request Limit: 4 00:30:06.022 Number of Firmware Slots: N/A 00:30:06.022 Firmware Slot 1 Read-Only: N/A 00:30:06.022 Firmware Activation Without Reset: N/A 00:30:06.022 Multiple Update Detection Support: N/A 00:30:06.022 Firmware Update Granularity: No Information Provided 00:30:06.022 Per-Namespace SMART Log: No 00:30:06.022 Asymmetric Namespace Access Log Page: Not Supported 00:30:06.022 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:06.022 Command Effects Log Page: Supported 00:30:06.022 Get Log Page Extended Data: Supported 00:30:06.022 Telemetry Log Pages: Not Supported 00:30:06.022 Persistent Event Log Pages: Not Supported 00:30:06.022 Supported Log Pages Log Page: May Support 00:30:06.022 Commands Supported & Effects Log Page: Not Supported 00:30:06.022 Feature Identifiers & Effects Log Page:May Support 00:30:06.022 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.022 Data Area 4 for Telemetry Log: Not Supported 00:30:06.022 Error Log Page Entries Supported: 128 00:30:06.022 Keep Alive: Supported 00:30:06.022 Keep Alive Granularity: 10000 ms 00:30:06.022 00:30:06.022 NVM Command Set Attributes 00:30:06.023 ========================== 00:30:06.023 Submission Queue Entry Size 00:30:06.023 Max: 64 00:30:06.023 Min: 64 00:30:06.023 Completion Queue Entry Size 00:30:06.023 Max: 16 00:30:06.023 Min: 16 00:30:06.023 Number of Namespaces: 32 00:30:06.023 Compare Command: Supported 00:30:06.023 Write Uncorrectable Command: Not Supported 00:30:06.023 Dataset Management Command: Supported 00:30:06.023 Write Zeroes Command: Supported 00:30:06.023 Set Features Save Field: Not Supported 00:30:06.023 Reservations: Supported 00:30:06.023 Timestamp: Not Supported 00:30:06.023 Copy: Supported 00:30:06.023 Volatile Write Cache: Present 00:30:06.023 Atomic Write Unit (Normal): 1 00:30:06.023 Atomic Write Unit (PFail): 1 00:30:06.023 Atomic Compare & Write Unit: 1 00:30:06.023 Fused Compare & Write: Supported 00:30:06.023 Scatter-Gather List 00:30:06.023 SGL Command Set: Supported 00:30:06.023 SGL Keyed: Supported 00:30:06.023 SGL Bit Bucket Descriptor: Not Supported 00:30:06.023 SGL Metadata Pointer: Not Supported 00:30:06.023 Oversized SGL: Not Supported 00:30:06.023 SGL Metadata Address: Not Supported 00:30:06.023 SGL Offset: Supported 00:30:06.023 Transport SGL Data Block: Not Supported 00:30:06.023 Replay Protected Memory Block: Not Supported 00:30:06.023 00:30:06.023 Firmware Slot Information 00:30:06.023 ========================= 00:30:06.023 Active slot: 1 00:30:06.023 Slot 1 Firmware Revision: 25.01 00:30:06.023 00:30:06.023 00:30:06.023 Commands Supported and Effects 00:30:06.023 ============================== 00:30:06.023 Admin Commands 00:30:06.023 -------------- 00:30:06.023 Get Log Page (02h): Supported 00:30:06.023 Identify (06h): 
Supported 00:30:06.023 Abort (08h): Supported 00:30:06.023 Set Features (09h): Supported 00:30:06.023 Get Features (0Ah): Supported 00:30:06.023 Asynchronous Event Request (0Ch): Supported 00:30:06.023 Keep Alive (18h): Supported 00:30:06.023 I/O Commands 00:30:06.023 ------------ 00:30:06.023 Flush (00h): Supported LBA-Change 00:30:06.023 Write (01h): Supported LBA-Change 00:30:06.023 Read (02h): Supported 00:30:06.023 Compare (05h): Supported 00:30:06.023 Write Zeroes (08h): Supported LBA-Change 00:30:06.023 Dataset Management (09h): Supported LBA-Change 00:30:06.023 Copy (19h): Supported LBA-Change 00:30:06.023 00:30:06.023 Error Log 00:30:06.023 ========= 00:30:06.023 00:30:06.023 Arbitration 00:30:06.023 =========== 00:30:06.023 Arbitration Burst: 1 00:30:06.023 00:30:06.023 Power Management 00:30:06.023 ================ 00:30:06.023 Number of Power States: 1 00:30:06.023 Current Power State: Power State #0 00:30:06.023 Power State #0: 00:30:06.023 Max Power: 0.00 W 00:30:06.023 Non-Operational State: Operational 00:30:06.023 Entry Latency: Not Reported 00:30:06.023 Exit Latency: Not Reported 00:30:06.023 Relative Read Throughput: 0 00:30:06.023 Relative Read Latency: 0 00:30:06.023 Relative Write Throughput: 0 00:30:06.023 Relative Write Latency: 0 00:30:06.023 Idle Power: Not Reported 00:30:06.023 Active Power: Not Reported 00:30:06.023 Non-Operational Permissive Mode: Not Supported 00:30:06.023 00:30:06.023 Health Information 00:30:06.023 ================== 00:30:06.023 Critical Warnings: 00:30:06.023 Available Spare Space: OK 00:30:06.023 Temperature: OK 00:30:06.023 Device Reliability: OK 00:30:06.023 Read Only: No 00:30:06.023 Volatile Memory Backup: OK 00:30:06.023 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:06.023 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:06.023 Available Spare: 0% 00:30:06.023 Available Spare Threshold: 0% 00:30:06.023 Life Percentage Used:[2024-12-13 12:35:33.529589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.529593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcb1de0) 00:30:06.023 [2024-12-13 12:35:33.529599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.023 [2024-12-13 12:35:33.529610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d9c0, cid 7, qid 0 00:30:06.023 [2024-12-13 12:35:33.529680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.023 [2024-12-13 12:35:33.529685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.023 [2024-12-13 12:35:33.529688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.529692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d9c0) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.529719] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:06.023 [2024-12-13 12:35:33.529728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0cf40) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.529733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.023 [2024-12-13 12:35:33.529738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d0c0) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 
12:35:33.529742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.023 [2024-12-13 12:35:33.529746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d240) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.529750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.023 [2024-12-13 12:35:33.529754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d3c0) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.529758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.023 [2024-12-13 12:35:33.529765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.529768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.529771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb1de0) 00:30:06.023 [2024-12-13 12:35:33.529777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.023 [2024-12-13 12:35:33.533798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d3c0, cid 3, qid 0 00:30:06.023 [2024-12-13 12:35:33.533968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.023 [2024-12-13 12:35:33.533975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.023 [2024-12-13 12:35:33.533978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.533981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d3c0) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.533989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.533993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.533996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb1de0) 00:30:06.023 [2024-12-13 12:35:33.534002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.023 [2024-12-13 12:35:33.534014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d3c0, cid 3, qid 0 00:30:06.023 [2024-12-13 12:35:33.534135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.023 [2024-12-13 12:35:33.534141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.023 [2024-12-13 12:35:33.534144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.534147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d3c0) on tqpair=0xcb1de0 00:30:06.023 [2024-12-13 12:35:33.534151] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:06.023 [2024-12-13 12:35:33.534155] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:06.023 [2024-12-13 12:35:33.534164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.534168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.023 [2024-12-13 12:35:33.534171] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb1de0)
00:30:06.023 [2024-12-13 12:35:33.534176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:06.023 [2024-12-13 12:35:33.534186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d3c0, cid 3, qid 0
00:30:06.023 [2024-12-13 12:35:33.534268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:06.023 [2024-12-13 12:35:33.534274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:06.023 [2024-12-13 12:35:33.534278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:06.023 [2024-12-13 12:35:33.534281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d3c0) on tqpair=0xcb1de0
00:30:06.023 [2024-12-13 12:35:33.534292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:06.023 [2024-12-13 12:35:33.534295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:06.023 [2024-12-13 12:35:33.534299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb1de0)
00:30:06.023 [2024-12-13 12:35:33.534305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:06.023 [2024-12-13 12:35:33.534315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd0d3c0, cid 3, qid 0
00:30:06.024 [2024-12-13 12:35:33.539970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:06.025 [2024-12-13 12:35:33.539976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:06.025 [2024-12-13 12:35:33.539979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:06.025 [2024-12-13 12:35:33.539982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd0d3c0) on tqpair=0xcb1de0
00:30:06.025 [2024-12-13 12:35:33.539988] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds
00:30:06.025 0%
00:30:06.025 Data Units Read: 0
00:30:06.025 Data Units Written: 0
00:30:06.025 Host Read Commands: 0
00:30:06.025 Host Write Commands: 0
00:30:06.025 Controller Busy Time: 0 minutes
00:30:06.025 Power Cycles: 0
00:30:06.025 Power On Hours: 0 hours
00:30:06.025 Unsafe Shutdowns: 0
00:30:06.025 Unrecoverable Media Errors: 0
00:30:06.025 Lifetime Error Log Entries: 0
00:30:06.025 Warning Temperature Time: 0 minutes
00:30:06.025 Critical Temperature Time: 0 minutes
00:30:06.025
00:30:06.025 Number of Queues
00:30:06.025 ================
00:30:06.025 Number of I/O Submission Queues: 127
00:30:06.025 Number of I/O Completion Queues: 127
00:30:06.025
00:30:06.025 Active Namespaces
00:30:06.025 =================
00:30:06.025 Namespace ID:1
00:30:06.025 Error Recovery Timeout: Unlimited
00:30:06.025 Command Set Identifier: NVM (00h)
00:30:06.025 Deallocate: Supported
00:30:06.025 Deallocated/Unwritten Error: Not Supported
00:30:06.025 Deallocated Read Value: Unknown
00:30:06.025 Deallocate in Write Zeroes: Not Supported
00:30:06.025 Deallocated Guard Field: 0xFFFF
00:30:06.025 Flush: Supported
00:30:06.025 Reservation: Supported
00:30:06.025 Namespace Sharing Capabilities: Multiple Controllers
00:30:06.025 Size (in LBAs): 131072 (0GiB)
00:30:06.025 Capacity (in LBAs): 131072 (0GiB)
00:30:06.025 Utilization (in LBAs): 131072 (0GiB)
00:30:06.025 NGUID: ABCDEF0123456789ABCDEF0123456789
00:30:06.025 EUI64: ABCDEF0123456789
00:30:06.025 UUID: 7a32584c-f978-4f5b-a0d2-d8fb05573a0e
00:30:06.025 Thin Provisioning: Not Supported
00:30:06.025 Per-NS Atomic Units: Yes
00:30:06.025 Atomic Boundary Size (Normal): 0
00:30:06.025 Atomic Boundary Size (PFail): 0
00:30:06.025 Atomic Boundary Offset: 0
00:30:06.025 Maximum Single Source Range Length: 65535
00:30:06.025 Maximum Copy Length: 65535
00:30:06.025 Maximum Source Range Count: 1
00:30:06.025 NGUID/EUI64 Never Reused: No
00:30:06.025 Namespace Write Protected: No
00:30:06.025 Number of LBA Formats: 1
00:30:06.025 Current LBA Format: LBA Format #00
00:30:06.025 LBA Format #00: Data Size: 512 Metadata Size: 0
00:30:06.025
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:06.025 rmmod nvme_tcp
00:30:06.025 rmmod nvme_fabrics
00:30:06.025 rmmod nvme_keyring
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 449389 ']'
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 449389
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 449389 ']'
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 449389
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449389
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449389'
00:30:06.025 killing process with pid 449389
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 449389
00:30:06.025 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 449389
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:06.284 12:35:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:08.819 12:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:08.819
00:30:08.819 real 0m9.172s
00:30:08.819 user 0m4.985s
00:30:08.819 sys 0m4.721s
00:30:08.819 12:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:08.819 12:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:30:08.819 ************************************
00:30:08.819 END TEST nvmf_identify
00:30:08.819 ************************************
00:30:08.819 12:35:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh
--transport=tcp 00:30:08.819 12:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.819 12:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.819 12:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.819 ************************************ 00:30:08.819 START TEST nvmf_perf 00:30:08.819 ************************************ 00:30:08.819 12:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:08.819 * Looking for test storage... 00:30:08.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.819 --rc genhtml_branch_coverage=1 00:30:08.819 --rc genhtml_function_coverage=1 00:30:08.819 --rc genhtml_legend=1 00:30:08.819 --rc geninfo_all_blocks=1 00:30:08.819 --rc geninfo_unexecuted_blocks=1 00:30:08.819 00:30:08.819 ' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.819 --rc genhtml_branch_coverage=1 00:30:08.819 --rc genhtml_function_coverage=1 00:30:08.819 --rc genhtml_legend=1 00:30:08.819 --rc geninfo_all_blocks=1 00:30:08.819 --rc geninfo_unexecuted_blocks=1 00:30:08.819 00:30:08.819 ' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.819 --rc genhtml_branch_coverage=1 00:30:08.819 --rc genhtml_function_coverage=1 00:30:08.819 --rc genhtml_legend=1 00:30:08.819 --rc geninfo_all_blocks=1 00:30:08.819 --rc geninfo_unexecuted_blocks=1 00:30:08.819 00:30:08.819 ' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.819 --rc genhtml_branch_coverage=1 00:30:08.819 --rc genhtml_function_coverage=1 00:30:08.819 --rc genhtml_legend=1 00:30:08.819 --rc geninfo_all_blocks=1 00:30:08.819 --rc geninfo_unexecuted_blocks=1 00:30:08.819 00:30:08.819 ' 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.819 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.820 12:35:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.820 12:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:14.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:14.095 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.095 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:14.096 Found net devices under 0000:af:00.0: cvl_0_0 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.096 12:35:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:14.096 Found net devices under 0000:af:00.1: cvl_0_1 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.096 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.355 12:35:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:30:14.355 00:30:14.355 --- 10.0.0.2 ping statistics --- 00:30:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.355 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:30:14.355 00:30:14.355 --- 10.0.0.1 ping statistics --- 00:30:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.355 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.355 12:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.355 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=452995 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 452995 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 452995 ']' 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:14.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.356 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.614 [2024-12-13 12:35:42.094842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:14.615 [2024-12-13 12:35:42.094891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.615 [2024-12-13 12:35:42.174795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.615 [2024-12-13 12:35:42.198847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.615 [2024-12-13 12:35:42.198882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.615 [2024-12-13 12:35:42.198889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.615 [2024-12-13 12:35:42.198899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.615 [2024-12-13 12:35:42.198904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.615 [2024-12-13 12:35:42.200301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.615 [2024-12-13 12:35:42.200409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.615 [2024-12-13 12:35:42.200435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.615 [2024-12-13 12:35:42.200434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.615 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.615 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:14.615 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.615 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.615 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.874 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.874 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:14.874 12:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
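The three xtrace steps just above stage the bdevs that perf.sh will export: the local controller's PCIe address is recovered from the bdev config, and a malloc bdev is created beside it. A minimal standalone sketch of that sequence, assuming only the rpc.py path printed in the log (the variable names here are illustrative, not the script's own):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Recover the PCIe address of the auto-attached NVMe controller (Nvme0)
    # from the bdev subsystem configuration loaded a moment earlier.
    local_nvme_trid=$("$rpc" framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # Stage a 64 MiB malloc bdev with 512-byte blocks as the first namespace.
    bdevs=$("$rpc" bdev_malloc_create 64 512)
    # When a local controller was found, export its namespace bdev as well.
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"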
00:30:18.162 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:30:18.421 [2024-12-13 12:35:45.934921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:18.421 12:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:18.680 12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:18.938 12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:30:18.938 12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:19.196 [2024-12-13 12:35:46.759238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:19.196 12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
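Stripped of the xtrace prefixes, the target provisioning recorded above reduces to five RPC calls. A sketch for reference, with every command and flag copied verbatim from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o          # bring up the TCP transport
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    for bdev in Malloc0 Nvme0n1; do                 # one namespace per staged bdev
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420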
00:30:19.454 12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
12:35:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:30:20.830 Initializing NVMe Controllers
00:30:20.830 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:30:20.830 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:30:20.830 Initialization complete. Launching workers.
00:30:20.830 ========================================================
00:30:20.830 Latency(us)
00:30:20.830 Device Information : IOPS MiB/s Average min max
00:30:20.830 PCIE (0000:5e:00.0) NSID 1 from core 0: 98178.17 383.51 325.41 32.90 5194.78
00:30:20.830 ========================================================
00:30:20.830 Total : 98178.17 383.51 325.41 32.90 5194.78
00:30:20.830
00:30:20.830 12:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:22.204 Initializing NVMe Controllers
00:30:22.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:22.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:22.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:22.204 Initialization complete. Launching workers.
00:30:22.204 ========================================================
00:30:22.204 Latency(us)
00:30:22.204 Device Information : IOPS MiB/s Average min max
00:30:22.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12228.49 108.54 44932.11
00:30:22.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 73.00 0.29 13968.28 6984.83 47901.07
00:30:22.204 ========================================================
00:30:22.204 Total : 156.00 0.61 13042.62 108.54 47901.07
00:30:22.204
00:30:22.204 12:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:23.139 Initializing NVMe Controllers
00:30:23.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:23.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:23.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:23.140 Initialization complete. Launching workers.
00:30:23.140 ========================================================
00:30:23.140 Latency(us)
00:30:23.140 Device Information : IOPS MiB/s Average min max
00:30:23.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11199.62 43.75 2861.79 373.35 10018.80
00:30:23.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3793.87 14.82 8469.18 4961.02 15734.72
00:30:23.140 ========================================================
00:30:23.140 Total : 14993.49 58.57 4280.66 373.35 15734.72
00:30:23.140
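The fabrics runs in this test all drive the same initiator binary at the listener provisioned above, varying only queue depth, IO size, and duration. A representative invocation rebuilt from the -q 32 run above; reading -H and -I as the TCP header- and data-digest switches is an assumption, the other flags are copied as-is:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # 4 KiB IOs, 50/50 random read/write mix, queue depth 32, for one second,
    # addressed to the NVMe-oF TCP target at 10.0.0.2:4420.
    "$perf" -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'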
00:30:26.424 Controller IO queue size 128, less than required.
00:30:26.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:26.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:26.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:26.424 Initialization complete. Launching workers.
00:30:26.424 ========================================================
00:30:26.424 Latency(us)
00:30:26.424 Device Information : IOPS MiB/s Average min max
00:30:26.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1831.97 457.99 70836.56 48893.35 110333.64
00:30:26.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.49 150.87 223354.65 79382.99 343502.07
00:30:26.424 ========================================================
00:30:26.424 Total : 2435.46 608.86 108629.49 48893.35 343502.07
00:30:26.424
00:30:26.424 12:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:30:26.424 No valid NVMe controllers or AIO or URING devices found
00:30:26.424 Initializing NVMe Controllers
00:30:26.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:26.424 Controller IO queue size 128, less than required.
00:30:26.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:26.424 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:30:26.424 Controller IO queue size 128, less than required.
00:30:26.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:26.424 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:30:26.424 WARNING: Some requested NVMe devices were skipped
00:30:26.424 12:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:30:28.961 Initializing NVMe Controllers
00:30:28.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:28.961 Controller IO queue size 128, less than required.
00:30:28.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.961 Controller IO queue size 128, less than required.
00:30:28.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:28.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:28.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:28.961 Initialization complete. Launching workers.
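The skipped-namespace warnings above are a plain alignment check: 36964 is not a multiple of the namespaces' 512-byte sector size, so perf removes both from the run and is then left with no valid devices. A quick way to see it (a hypothetical one-liner, not from the script):

    echo $((36964 % 512))    # prints 100, i.e. not sector-aligned

Every other I/O size used in this test (512, 4096, 131072, 262144) divides evenly. The per-queue transport statistics gathered by the --transport-stat run follow below.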
00:30:28.961
00:30:28.961 ====================
00:30:28.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:30:28.961 TCP transport:
00:30:28.961 polls: 11644
00:30:28.961 idle_polls: 8136
00:30:28.961 sock_completions: 3508
00:30:28.961 nvme_completions: 6283
00:30:28.961 submitted_requests: 9348
00:30:28.961 queued_requests: 1
00:30:28.961
00:30:28.961 ====================
00:30:28.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:30:28.961 TCP transport:
00:30:28.961 polls: 15671
00:30:28.961 idle_polls: 11555
00:30:28.961 sock_completions: 4116
00:30:28.961 nvme_completions: 6737
00:30:28.961 submitted_requests: 10024
00:30:28.961 queued_requests: 1
00:30:28.961 ========================================================
00:30:28.961 Latency(us)
00:30:28.961 Device Information : IOPS MiB/s Average min max
00:30:28.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1570.50 392.62 83613.04 55227.54 136737.80
00:30:28.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1684.00 421.00 76542.50 49091.32 119309.57
00:30:28.961 ========================================================
00:30:28.961 Total : 3254.50 813.62 79954.47 49091.32 136737.80
00:30:28.961
00:30:28.961 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:30:29.220 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:29.220 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:30:29.220 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']'
00:30:29.220 12:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0165d5a2-a203-4a5c-999a-29be5d6a4d94
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0165d5a2-a203-4a5c-999a-29be5d6a4d94
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0165d5a2-a203-4a5c-999a-29be5d6a4d94
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:30:32.509 12:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:32.509 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:30:32.509 {
00:30:32.509 "uuid": "0165d5a2-a203-4a5c-999a-29be5d6a4d94",
00:30:32.509 "name": "lvs_0",
00:30:32.509 "base_bdev": "Nvme0n1",
00:30:32.509 "total_data_clusters": 238234,
00:30:32.509 "free_clusters": 238234,
00:30:32.509 "block_size": 512,
00:30:32.509 "cluster_size": 4194304
00:30:32.509 }
00:30:32.509 ]'
00:30:32.509 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0165d5a2-a203-4a5c-999a-29be5d6a4d94") .free_clusters'
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0165d5a2-a203-4a5c-999a-29be5d6a4d94") .cluster_size'
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936
00:30:32.768 952936
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
00:30:32.768 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0165d5a2-a203-4a5c-999a-29be5d6a4d94 lbd_0 20480
00:30:33.336 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9e03565a-0103-4a80-86a0-a71a946e7aaf
00:30:33.336 12:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9e03565a-0103-4a80-86a0-a71a946e7aaf lvs_n_0
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3220c69f-08ed-4e72-8462-0c4f6baea197
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3220c69f-08ed-4e72-8462-0c4f6baea197
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3220c69f-08ed-4e72-8462-0c4f6baea197
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:30:33.903 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:30:34.163 {
00:30:34.163 "uuid": "0165d5a2-a203-4a5c-999a-29be5d6a4d94",
00:30:34.163 "name": "lvs_0",
00:30:34.163 "base_bdev": "Nvme0n1",
00:30:34.163 "total_data_clusters": 238234,
00:30:34.163 "free_clusters": 233114,
00:30:34.163 "block_size": 512,
00:30:34.163 "cluster_size": 4194304
00:30:34.163 },
00:30:34.163 {
00:30:34.163 "uuid": "3220c69f-08ed-4e72-8462-0c4f6baea197",
00:30:34.163 "name": "lvs_n_0",
00:30:34.163 "base_bdev": "9e03565a-0103-4a80-86a0-a71a946e7aaf",
00:30:34.163 "total_data_clusters": 5114,
00:30:34.163 "free_clusters": 5114,
00:30:34.163 "block_size": 512,
00:30:34.163 "cluster_size": 4194304
00:30:34.163 }
00:30:34.163 ]'
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3220c69f-08ed-4e72-8462-0c4f6baea197") .free_clusters'
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3220c69f-08ed-4e72-8462-0c4f6baea197") .cluster_size'
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456
00:30:34.163 20456
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:30:34.163 12:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3220c69f-08ed-4e72-8462-0c4f6baea197 lbd_nest_0 20456
00:30:34.422 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0dc7f609-ee0e-4775-a555-5808ad9b5521
00:30:34.422 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:34.681 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:30:34.681 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0dc7f609-ee0e-4775-a555-5808ad9b5521
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:34.940 12:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:47.149 Initializing NVMe Controllers
00:30:47.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:47.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:47.149 Initialization complete. Launching workers.
00:30:47.149 ========================================================
00:30:47.149 Latency(us)
00:30:47.149 Device Information : IOPS MiB/s Average min max
00:30:47.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.40 0.02 22589.50 123.32 45713.74
00:30:47.149 ========================================================
00:30:47.149 Total : 44.40 0.02 22589.50 123.32 45713.74
00:30:47.149
00:30:47.149 12:36:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:47.149 12:36:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:57.126 Initializing NVMe Controllers
00:30:57.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:57.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:57.126 Initialization complete. Launching workers.
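The get_lvs_free_mb arithmetic traced in the lvolstore setup above is a straight clusters-to-megabytes conversion, free_mb = free_clusters * cluster_size / 1 MiB; with the values from the two bdev_lvol_get_lvstores dumps (a sketch of the same computation, not the helper itself):

    echo $(( 238234 * 4194304 / 1048576 ))   # lvs_0:   952936 MB free
    echo $((   5114 * 4194304 / 1048576 ))   # lvs_n_0:  20456 MB free

Because 952936 exceeds the 20480 MB cap in perf.sh, lbd_0 is created at 20480 MB; the store nested on it reports 5114 free clusters rather than 5120, presumably because lvolstore metadata consumes the difference, so lbd_nest_0 lands at 20456 MB. That nested lvol is the single namespace behind the queue-depth sweep whose runs appear above and continue below.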
00:30:57.126 ========================================================
00:30:57.126 Latency(us)
00:30:57.126 Device Information : IOPS MiB/s Average min max
00:30:57.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.47 8.93 13990.26 5031.63 50876.04
00:30:57.126 ========================================================
00:30:57.126 Total : 71.47 8.93 13990.26 5031.63 50876.04
00:30:57.126
00:30:57.126 12:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:30:57.126 12:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:30:57.126 12:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:07.099 Initializing NVMe Controllers
00:31:07.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:07.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:07.099 Initialization complete. Launching workers.
00:31:07.099 ========================================================
00:31:07.099 Latency(us)
00:31:07.099 Device Information : IOPS MiB/s Average min max
00:31:07.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8727.30 4.26 3667.85 229.91 9494.78
00:31:07.099 ========================================================
00:31:07.099 Total : 8727.30 4.26 3667.85 229.91 9494.78
00:31:07.099
00:31:07.099 12:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:07.099 12:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:17.071 Initializing NVMe Controllers
00:31:17.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:17.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:17.071 Initialization complete. Launching workers.
00:31:17.071 ========================================================
00:31:17.071 Latency(us)
00:31:17.071 Device Information : IOPS MiB/s Average min max
00:31:17.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4373.39 546.67 7317.83 542.69 17954.55
00:31:17.071 ========================================================
00:31:17.071 Total : 4373.39 546.67 7317.83 542.69 17954.55
00:31:17.071
00:31:17.071 12:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:17.071 12:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:17.071 12:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:27.046 Initializing NVMe Controllers
00:31:27.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:27.046 Controller IO queue size 128, less than required.
00:31:27.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
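The runs above and below are the cross product of the qd_depth and io_size arrays declared after the nested lvol was exported; schematically (a sketch, with the perf path shortened and the target string as in the trace):

    for qd in 1 32 128; do
        for o in 512 131072; do
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done

Six runs in total; the q=128 cases, whose output continues below, again trip the controller queue-size notice seen earlier.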
00:31:27.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:27.046 Initialization complete. Launching workers.
00:31:27.046 ========================================================
00:31:27.046 Latency(us)
00:31:27.046 Device Information : IOPS MiB/s Average min max
00:31:27.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15820.33 7.72 8090.85 1368.83 22886.76
00:31:27.046 ========================================================
00:31:27.046 Total : 15820.33 7.72 8090.85 1368.83 22886.76
00:31:27.046
00:31:27.046 12:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:27.046 12:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:39.252 Initializing NVMe Controllers
00:31:39.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:39.252 Controller IO queue size 128, less than required.
00:31:39.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:39.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:39.252 Initialization complete. Launching workers.
00:31:39.252 ========================================================
00:31:39.252 Latency(us)
00:31:39.252 Device Information : IOPS MiB/s Average min max
00:31:39.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.05 149.38 107488.94 16088.66 231700.64
00:31:39.252 ========================================================
00:31:39.252 Total : 1195.05 149.38 107488.94 16088.66 231700.64
00:31:39.252
00:31:39.252 12:37:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:39.252 12:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0dc7f609-ee0e-4775-a555-5808ad9b5521
00:31:39.252 12:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9e03565a-0103-4a80-86a0-a71a946e7aaf
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:39.252 rmmod nvme_tcp
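Teardown above mirrors creation in reverse: the subsystem goes first so nothing references the bdevs, then the nested lvol and its store, then the base lvol and store (a sketch; commands and UUIDs are as in the trace, with $rpc abbreviating the rpc.py path):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $rpc bdev_lvol_delete 0dc7f609-ee0e-4775-a555-5808ad9b5521    # nested lvol
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0                      # nested store
    $rpc bdev_lvol_delete 9e03565a-0103-4a80-86a0-a71a946e7aaf    # base lvol backing it
    $rpc bdev_lvol_delete_lvstore -l lvs_0                        # base store

The module unload that nvmftestfini started above continues below (nvme_fabrics and nvme_keyring follow nvme_tcp).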
00:31:39.252 rmmod nvme_fabrics 00:31:39.252 rmmod nvme_keyring 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 452995 ']' 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 452995 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 452995 ']' 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 452995 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452995 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452995' 00:31:39.252 killing process with pid 452995 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 452995 00:31:39.252 12:37:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 452995 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.629 12:37:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.531 00:31:42.531 real 1m34.113s 00:31:42.531 user 5m35.698s 00:31:42.531 sys 0m17.388s 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:42.531 ************************************ 00:31:42.531 END TEST nvmf_perf 00:31:42.531 ************************************ 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.531 ************************************ 00:31:42.531 START TEST nvmf_fio_host 00:31:42.531 ************************************ 00:31:42.531 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.790 * Looking for test storage... 00:31:42.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.790 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.791 --rc genhtml_branch_coverage=1 00:31:42.791 --rc genhtml_function_coverage=1 00:31:42.791 --rc genhtml_legend=1 00:31:42.791 --rc geninfo_all_blocks=1 00:31:42.791 --rc geninfo_unexecuted_blocks=1 00:31:42.791 00:31:42.791 ' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.791 --rc genhtml_branch_coverage=1 00:31:42.791 --rc genhtml_function_coverage=1 00:31:42.791 --rc genhtml_legend=1 00:31:42.791 --rc geninfo_all_blocks=1 00:31:42.791 --rc geninfo_unexecuted_blocks=1 00:31:42.791 00:31:42.791 ' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.791 --rc genhtml_branch_coverage=1 00:31:42.791 --rc genhtml_function_coverage=1 00:31:42.791 --rc genhtml_legend=1 00:31:42.791 --rc geninfo_all_blocks=1 00:31:42.791 --rc geninfo_unexecuted_blocks=1 00:31:42.791 00:31:42.791 ' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.791 --rc genhtml_branch_coverage=1 00:31:42.791 --rc genhtml_function_coverage=1 00:31:42.791 --rc genhtml_legend=1 00:31:42.791 --rc geninfo_all_blocks=1 00:31:42.791 --rc geninfo_unexecuted_blocks=1 00:31:42.791 00:31:42.791 ' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.791 12:37:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.791 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:42.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.792 
12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.792 12:37:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:49.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:49.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:49.356 Found net devices under 0000:af:00.0: cvl_0_0 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:49.356 Found net devices under 0000:af:00.1: cvl_0_1 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.356 12:37:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.356 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:31:49.356 00:31:49.356 --- 10.0.0.2 ping statistics --- 00:31:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.357 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:49.357 00:31:49.357 --- 10.0.0.1 ping statistics --- 00:31:49.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.357 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=470495 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 470495 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 470495 ']' 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.357 [2024-12-13 12:37:16.307997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:31:49.357 [2024-12-13 12:37:16.308042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.357 [2024-12-13 12:37:16.382851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:49.357 [2024-12-13 12:37:16.406520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.357 [2024-12-13 12:37:16.406556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.357 [2024-12-13 12:37:16.406564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.357 [2024-12-13 12:37:16.406570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.357 [2024-12-13 12:37:16.406575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.357 [2024-12-13 12:37:16.407887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.357 [2024-12-13 12:37:16.407996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.357 [2024-12-13 12:37:16.408100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.357 [2024-12-13 12:37:16.408102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.357 [2024-12-13 12:37:16.669209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:49.357 Malloc1 00:31:49.357 12:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.616 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:49.874 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.874 [2024-12-13 12:37:17.545127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.874 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.132 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:50.133 12:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.699 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:50.699 fio-3.35 00:31:50.699 Starting 1 thread 00:31:53.230 00:31:53.230 test: (groupid=0, jobs=1): 
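The fio_plugin helper traced above boils down to preloading SPDK's fio ioengine and handing the connection parameters to fio through --filename. A condensed sketch, with the paths and connection string from this run:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job file selects ioengine=spdk (visible in the fio banner below), which is why the plugin must be preloaded; note that the whole key=value connection string travels to fio as one quoted filename argument.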
err= 0: pid=470969: Fri Dec 13 12:37:20 2024 00:31:53.230 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:31:53.231 slat (nsec): min=1523, max=240144, avg=1677.13, stdev=2232.06 00:31:53.231 clat (usec): min=2920, max=10515, avg=5938.04, stdev=438.01 00:31:53.231 lat (usec): min=2945, max=10516, avg=5939.72, stdev=437.80 00:31:53.231 clat percentiles (usec): 00:31:53.231 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:31:53.231 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:31:53.231 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:31:53.231 | 99.00th=[ 6915], 99.50th=[ 6980], 99.90th=[ 8225], 99.95th=[ 9241], 00:31:53.231 | 99.99th=[10159] 00:31:53.231 bw ( KiB/s): min=46408, max=48168, per=99.96%, avg=47540.00, stdev=813.97, samples=4 00:31:53.231 iops : min=11602, max=12042, avg=11885.00, stdev=203.49, samples=4 00:31:53.231 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:31:53.231 slat (nsec): min=1557, max=240656, avg=1746.68, stdev=1705.29 00:31:53.231 clat (usec): min=2436, max=9257, avg=4786.00, stdev=364.65 00:31:53.231 lat (usec): min=2452, max=9258, avg=4787.74, stdev=364.54 00:31:53.231 clat percentiles (usec): 00:31:53.231 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:31:53.231 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:31:53.231 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:31:53.231 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 7439], 99.95th=[ 8717], 00:31:53.231 | 99.99th=[ 9241] 00:31:53.231 bw ( KiB/s): min=46976, max=47872, per=100.00%, avg=47346.00, stdev=442.36, samples=4 00:31:53.231 iops : min=11744, max=11968, avg=11836.50, stdev=110.59, samples=4 00:31:53.231 lat (msec) : 4=0.69%, 10=99.31%, 20=0.01% 00:31:53.231 cpu : usr=72.55%, sys=26.45%, ctx=103, majf=0, minf=3 00:31:53.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:53.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.231 issued rwts: total=23839,23731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.231 00:31:53.231 Run status group 0 (all jobs): 00:31:53.231 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:31:53.231 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# local sanitizers 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.231 12:37:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:53.231 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:53.231 fio-3.35 00:31:53.231 Starting 1 thread 00:31:55.763 00:31:55.764 test: (groupid=0, jobs=1): err= 0: pid=471422: Fri Dec 13 12:37:23 2024 00:31:55.764 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2005msec) 00:31:55.764 slat (nsec): min=2502, max=86137, avg=2833.51, stdev=1317.06 00:31:55.764 clat (usec): min=1541, max=49864, avg=6882.73, stdev=3371.69 00:31:55.764 lat (usec): min=1544, max=49867, avg=6885.56, stdev=3371.76 00:31:55.764 clat percentiles (usec): 00:31:55.764 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:31:55.764 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7111], 00:31:55.764 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9372], 00:31:55.764 | 99.00th=[10945], 99.50th=[43254], 99.90th=[48497], 99.95th=[49021], 00:31:55.764 | 99.99th=[49546] 00:31:55.764 bw ( KiB/s): min=72864, max=98240, per=50.36%, avg=87512.00, stdev=10908.81, samples=4 00:31:55.764 iops : min= 4554, max= 6140, avg=5469.50, stdev=681.80, samples=4 00:31:55.764 write: IOPS=6605, BW=103MiB/s (108MB/s)(179MiB/1734msec); 0 zone resets 
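The repeated ldd/grep/awk triples in this trace all implement one guard: if the fio plugin was built against a sanitizer runtime, that runtime must appear in LD_PRELOAD ahead of the plugin. Condensed from the traced logic (on this build both greps come back empty, so asan_lib stays unset):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break        # preload the sanitizer runtime first if present
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"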
00:31:55.764 slat (usec): min=28, max=379, avg=31.80, stdev= 7.61 00:31:55.764 clat (usec): min=2943, max=14948, avg=8641.58, stdev=1446.41 00:31:55.764 lat (usec): min=2973, max=15064, avg=8673.37, stdev=1447.84 00:31:55.764 clat percentiles (usec): 00:31:55.764 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7373], 00:31:55.764 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8848], 00:31:55.764 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:31:55.764 | 99.00th=[12518], 99.50th=[12911], 99.90th=[14353], 99.95th=[14615], 00:31:55.764 | 99.99th=[14877] 00:31:55.764 bw ( KiB/s): min=77024, max=101600, per=86.43%, avg=91344.00, stdev=10568.74, samples=4 00:31:55.764 iops : min= 4814, max= 6350, avg=5709.00, stdev=660.55, samples=4 00:31:55.764 lat (msec) : 2=0.02%, 4=1.91%, 10=90.29%, 20=7.39%, 50=0.38% 00:31:55.764 cpu : usr=85.33%, sys=14.02%, ctx=43, majf=0, minf=3 00:31:55.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:55.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.764 issued rwts: total=21775,11454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.764 00:31:55.764 Run status group 0 (all jobs): 00:31:55.764 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2005-2005msec 00:31:55.764 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=179MiB (188MB), run=1734-1734msec 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:55.764 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:56.022 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:56.022 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:56.022 12:37:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:59.306 Nvme0n1 00:31:59.306 12:37:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
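With the malloc-backed smoke tests done, host/fio.sh switches to real hardware: get_nvme_bdfs asks gen_nvme.sh for the local NVMe controllers and pulls the PCI addresses out with jq, and the single device found here, 0000:5e:00.0, is attached as bdev Nvme0. A sketch of that discovery step ($rpc abbreviates the rpc.py path as in the earlier annotation):

    bdfs=($(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh |
            jq -r '.config[].params.traddr'))    # one BDF per local NVMe controller
    (( ${#bdfs[@]} > 0 )) || exit 1              # the trace guards against an empty list too
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}" -i 10.0.0.2   # flags as traced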
ls_guid=dada15f2-851e-4c44-b727-5385ebae97af 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb dada15f2-851e-4c44-b727-5385ebae97af 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=dada15f2-851e-4c44-b727-5385ebae97af 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:01.838 { 00:32:01.838 "uuid": "dada15f2-851e-4c44-b727-5385ebae97af", 00:32:01.838 "name": "lvs_0", 00:32:01.838 "base_bdev": "Nvme0n1", 00:32:01.838 "total_data_clusters": 930, 00:32:01.838 "free_clusters": 930, 00:32:01.838 "block_size": 512, 00:32:01.838 "cluster_size": 1073741824 00:32:01.838 } 00:32:01.838 ]' 00:32:01.838 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="dada15f2-851e-4c44-b727-5385ebae97af") .free_clusters' 00:32:02.096 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:02.096 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="dada15f2-851e-4c44-b727-5385ebae97af") .cluster_size' 00:32:02.096 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:02.096 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:02.096 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:02.096 952320 00:32:02.097 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:02.355 667c1c57-352b-4e32-ba60-debf0d45c8e7 00:32:02.355 12:37:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:02.613 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:02.872 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:03.158 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.158 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.158 12:37:30 
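The get_lvs_free_mb helper traced above is plain arithmetic over bdev_lvol_get_lvstores output: free megabytes = free_clusters x cluster_size / 2^20. A sketch with this run's numbers:

    lvs=dada15f2-851e-4c44-b727-5385ebae97af
    info=$($rpc bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs\") .free_clusters" <<< "$info")   # 930
    cs=$(jq ".[] | select(.uuid==\"$lvs\") .cluster_size"  <<< "$info")   # 1073741824 (1 GiB)
    echo $(( fc * cs / 1048576 ))                                         # 930 * 1024 = 952320

That 952320 MiB is exactly the size handed to bdev_lvol_create for lbd_0 below.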
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:03.159 12:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.420 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:03.420 fio-3.35 00:32:03.420 Starting 1 thread 00:32:05.943 00:32:05.943 test: (groupid=0, jobs=1): err= 0: pid=473207: Fri Dec 13 12:37:33 2024 00:32:05.943 read: IOPS=8074, BW=31.5MiB/s (33.1MB/s)(63.3MiB/2006msec) 00:32:05.943 slat (nsec): min=1513, max=86840, avg=1639.61, stdev=1030.61 00:32:05.943 clat (usec): min=904, max=169839, avg=8707.96, stdev=10255.41 00:32:05.943 lat (usec): min=905, max=169858, avg=8709.60, stdev=10255.55 00:32:05.943 clat percentiles (msec): 00:32:05.943 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:05.943 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:32:05.943 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:32:05.943 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:32:05.943 | 
99.99th=[ 169] 00:32:05.943 bw ( KiB/s): min=22952, max=35504, per=99.96%, avg=32286.00, stdev=6223.40, samples=4 00:32:05.943 iops : min= 5738, max= 8876, avg=8071.50, stdev=1555.85, samples=4 00:32:05.943 write: IOPS=8066, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2006msec); 0 zone resets 00:32:05.943 slat (nsec): min=1561, max=74999, avg=1705.60, stdev=665.69 00:32:05.943 clat (usec): min=171, max=168424, avg=7057.80, stdev=9580.79 00:32:05.943 lat (usec): min=172, max=168429, avg=7059.51, stdev=9580.94 00:32:05.943 clat percentiles (msec): 00:32:05.943 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:32:05.943 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:05.943 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:32:05.943 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:32:05.943 | 99.99th=[ 169] 00:32:05.943 bw ( KiB/s): min=23872, max=35080, per=99.85%, avg=32220.00, stdev=5565.72, samples=4 00:32:05.943 iops : min= 5968, max= 8770, avg=8055.00, stdev=1391.43, samples=4 00:32:05.943 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:05.943 lat (msec) : 2=0.05%, 4=0.24%, 10=99.12%, 20=0.17%, 250=0.40% 00:32:05.943 cpu : usr=70.82%, sys=28.33%, ctx=142, majf=0, minf=3 00:32:05.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:05.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:05.943 issued rwts: total=16198,16182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:05.943 00:32:05.943 Run status group 0 (all jobs): 00:32:05.943 READ: bw=31.5MiB/s (33.1MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.3MB), run=2006-2006msec 00:32:05.943 WRITE: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2006-2006msec 00:32:05.943 12:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:05.943 12:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=cbca1633-5690-4f5c-af7d-203a46a00ad0 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb cbca1633-5690-4f5c-af7d-203a46a00ad0 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=cbca1633-5690-4f5c-af7d-203a46a00ad0 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:06.872 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:07.129 { 00:32:07.129 "uuid": "dada15f2-851e-4c44-b727-5385ebae97af", 00:32:07.129 "name": "lvs_0", 00:32:07.129 "base_bdev": "Nvme0n1", 00:32:07.129 "total_data_clusters": 
930, 00:32:07.129 "free_clusters": 0, 00:32:07.129 "block_size": 512, 00:32:07.129 "cluster_size": 1073741824 00:32:07.129 }, 00:32:07.129 { 00:32:07.129 "uuid": "cbca1633-5690-4f5c-af7d-203a46a00ad0", 00:32:07.129 "name": "lvs_n_0", 00:32:07.129 "base_bdev": "667c1c57-352b-4e32-ba60-debf0d45c8e7", 00:32:07.129 "total_data_clusters": 237847, 00:32:07.129 "free_clusters": 237847, 00:32:07.129 "block_size": 512, 00:32:07.129 "cluster_size": 4194304 00:32:07.129 } 00:32:07.129 ]' 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="cbca1633-5690-4f5c-af7d-203a46a00ad0") .free_clusters' 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cbca1633-5690-4f5c-af7d-203a46a00ad0") .cluster_size' 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:07.129 951388 00:32:07.129 12:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:07.691 8af199f7-b931-4406-933d-b3f5788be31a 00:32:07.691 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:07.947 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:08.203 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:08.460 12:37:35 
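The same free-space computation repeats just above for the nested store lvs_n_0, which sits on top of the lvol lbd_0 rather than directly on Nvme0n1: 237847 free clusters x 4194304 B/cluster / 1048576 = 237847 x 4 = 951388 MiB, which is the size passed to bdev_lvol_create for lbd_nest_0 below. Note that nesting also explains why the outer store now reports free_clusters 0: lbd_0 consumed all 930 of its 1 GiB clusters.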
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.460 12:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:08.460 12:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.460 12:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:08.460 12:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:08.460 12:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:08.460 12:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:08.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:08.717 fio-3.35 00:32:08.717 Starting 1 thread 00:32:11.236 00:32:11.236 test: (groupid=0, jobs=1): err= 0: pid=474137: Fri Dec 13 12:37:38 2024 00:32:11.236 read: IOPS=7849, BW=30.7MiB/s (32.2MB/s)(61.5MiB/2006msec) 00:32:11.236 slat (nsec): min=1497, max=88661, avg=1684.64, stdev=1047.35 00:32:11.236 clat (usec): min=3072, max=14747, avg=8952.31, stdev=813.71 00:32:11.236 lat (usec): min=3076, max=14749, avg=8954.00, stdev=813.65 00:32:11.236 clat percentiles (usec): 00:32:11.236 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8356], 00:32:11.236 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:11.236 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:32:11.236 | 99.00th=[10683], 99.50th=[10945], 99.90th=[13698], 99.95th=[14615], 00:32:11.236 | 99.99th=[14746] 00:32:11.236 bw ( KiB/s): min=30200, max=31896, per=99.84%, avg=31350.00, stdev=784.61, samples=4 00:32:11.236 iops : min= 7550, max= 7974, avg=7837.50, stdev=196.15, samples=4 00:32:11.236 write: IOPS=7825, BW=30.6MiB/s (32.1MB/s)(61.3MiB/2006msec); 0 zone resets 00:32:11.236 slat (nsec): min=1524, max=72243, avg=1750.68, stdev=678.26 00:32:11.236 clat (usec): min=1377, max=12086, avg=7290.97, stdev=641.83 00:32:11.236 lat (usec): min=1382, max=12088, avg=7292.72, stdev=641.80 00:32:11.236 clat percentiles (usec): 00:32:11.236 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:32:11.236 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:32:11.236 | 
70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:32:11.236 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[ 9634], 99.95th=[10814], 00:32:11.236 | 99.99th=[12125] 00:32:11.236 bw ( KiB/s): min=31232, max=31360, per=99.95%, avg=31284.00, stdev=57.50, samples=4 00:32:11.236 iops : min= 7808, max= 7840, avg=7821.00, stdev=14.38, samples=4 00:32:11.236 lat (msec) : 2=0.01%, 4=0.11%, 10=95.86%, 20=4.02% 00:32:11.236 cpu : usr=72.27%, sys=26.78%, ctx=105, majf=0, minf=3 00:32:11.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:11.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.236 issued rwts: total=15747,15697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.236 00:32:11.236 Run status group 0 (all jobs): 00:32:11.236 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.5MiB (64.5MB), run=2006-2006msec 00:32:11.236 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.3MiB (64.3MB), run=2006-2006msec 00:32:11.236 12:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:11.236 12:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:11.236 12:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:15.408 12:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:15.408 12:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:17.928 12:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:18.184 12:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.079 rmmod nvme_tcp 00:32:20.079 rmmod nvme_fabrics 00:32:20.079 rmmod nvme_keyring 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 
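Cleanup in host/fio.sh runs strictly in reverse of construction, as the trace above confirms: the nested lvol goes first, then its store, then the outer lvol and store, and only then is the PCIe controller released. In sketch form, with the commands as traced:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    $rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0    # long RPC timeout: deleting a ~930 GiB lvol
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0
    $rpc bdev_lvol_delete lvs_0/lbd_0
    $rpc bdev_lvol_delete_lvstore -l lvs_0
    $rpc bdev_nvme_detach_controller Nvme0             # finally hand the NVMe device back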
00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 470495 ']' 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 470495 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 470495 ']' 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 470495 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470495 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470495' 00:32:20.079 killing process with pid 470495 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 470495 00:32:20.079 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 470495 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.338 12:37:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.876 12:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.876 00:32:22.876 real 0m39.787s 00:32:22.876 user 2m39.880s 00:32:22.876 sys 0m8.856s 00:32:22.876 12:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.876 12:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.876 ************************************ 00:32:22.876 END TEST nvmf_fio_host 00:32:22.876 ************************************ 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 
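The killprocess trace above follows a defensive shape worth noting: probe the PID with kill -0, confirm the process name before sending a signal, then kill and wait so the exit status is reaped. A reconstruction from the trace only, not the verbatim helper (the real function in autotest_common.sh also special-cases a sudo wrapper, which this run does not exercise):

    killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" || return 0                          # nothing to do if already gone
      [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"                # here: pid 470495, comm reactor_0
      kill "$pid"
      wait "$pid"                                         # reap; propagates the target's exit code
    }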
-- # '[' 3 -le 1 ']' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.876 ************************************ 00:32:22.876 START TEST nvmf_failover 00:32:22.876 ************************************ 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:22.876 * Looking for test storage... 00:32:22.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.876 --rc genhtml_branch_coverage=1 00:32:22.876 --rc genhtml_function_coverage=1 00:32:22.876 --rc genhtml_legend=1 00:32:22.876 --rc geninfo_all_blocks=1 00:32:22.876 --rc geninfo_unexecuted_blocks=1 00:32:22.876 00:32:22.876 ' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.876 --rc genhtml_branch_coverage=1 00:32:22.876 --rc genhtml_function_coverage=1 00:32:22.876 --rc genhtml_legend=1 00:32:22.876 --rc geninfo_all_blocks=1 00:32:22.876 --rc geninfo_unexecuted_blocks=1 00:32:22.876 00:32:22.876 ' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.876 --rc genhtml_branch_coverage=1 00:32:22.876 --rc genhtml_function_coverage=1 00:32:22.876 --rc genhtml_legend=1 00:32:22.876 --rc geninfo_all_blocks=1 00:32:22.876 --rc geninfo_unexecuted_blocks=1 00:32:22.876 00:32:22.876 ' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.876 --rc genhtml_branch_coverage=1 00:32:22.876 --rc genhtml_function_coverage=1 00:32:22.876 --rc genhtml_legend=1 00:32:22.876 --rc geninfo_all_blocks=1 00:32:22.876 --rc geninfo_unexecuted_blocks=1 00:32:22.876 00:32:22.876 ' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
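The lcov probe traced above walks scripts/common.sh's version comparison: split both version strings on ".", "-" and ":", then compare numerically field by field, treating missing fields as 0. A simplified sketch of the lt check (version_lt is an illustrative name for what the script does via lt/cmp_versions; the real helper also normalizes non-numeric fields, omitted here):

    version_lt() {
      local IFS=.-: v ver1 ver2
      read -ra ver1 <<< "$1"; read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                                            # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"         # true for the lcov found here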
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.876 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:22.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
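The "[: : integer expression expected" message above is genuine noise from common.sh line 33: the traced test '[' '' -eq 1 ']' hands an empty string to a numeric comparison, which test(1) cannot parse as an integer, and the script carries on regardless. A two-line illustration (FLAG is an illustrative name, not the variable common.sh actually reads):

    [ "" -eq 1 ]            # reproduces: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]  # defaulting an unset variable sidesteps the parse error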
00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.877 12:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.153 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:28.154 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:28.154 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:28.154 Found net devices under 0000:af:00.0: cvl_0_0 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:28.154 Found net devices under 0000:af:00.1: cvl_0_1 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.154 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.414 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.414 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.414 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.414 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.414 12:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:32:28.414 00:32:28.414 --- 10.0.0.2 ping statistics --- 00:32:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.414 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:28.414 00:32:28.414 --- 10.0.0.1 ping statistics --- 00:32:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.414 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=479376 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 479376 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479376 ']' 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.414 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.673 [2024-12-13 12:37:56.120379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:28.673 [2024-12-13 12:37:56.120419] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.673 [2024-12-13 12:37:56.196994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:28.673 [2024-12-13 12:37:56.219095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
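Condensed, the nvmf_tcp_init sequence above builds the whole test topology on one host: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in the firewall, and reachability is proven with one ping in each direction. Replayed as a plain script (run as root; the cvl_* names are the ice netdevs discovered above, and the iptables comment the harness tacks on is dropped here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator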
00:32:28.673 [2024-12-13 12:37:56.219131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.673 [2024-12-13 12:37:56.219138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.673 [2024-12-13 12:37:56.219144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.673 [2024-12-13 12:37:56.219149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.673 [2024-12-13 12:37:56.220411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.673 [2024-12-13 12:37:56.220523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.673 [2024-12-13 12:37:56.220522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.673 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:28.931 [2024-12-13 12:37:56.512724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.931 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:29.189 Malloc0 00:32:29.189 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.447 12:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:29.704 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.704 [2024-12-13 12:37:57.327527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.704 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:29.961 [2024-12-13 12:37:57.516045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:29.961 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:30.220 [2024-12-13 12:37:57.696618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
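The target-side configuration above boils down to five RPCs against the nvmf_tgt just started inside the namespace. As a sketch ($rpc abbreviates the full scripts/rpc.py path shown in the log lines; three listeners on one IP are what give the failover test three paths to juggle):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # transport flags exactly as failover.sh@22 used
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                   # three listeners = three failover paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done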
00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=479632 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 479632 /var/tmp/bdevperf.sock 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479632 ']' 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.220 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:30.478 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.478 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:30.478 12:37:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:30.736 NVMe0n1 00:32:30.736 12:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:30.994 00 00:32:30.994 12:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=479852 00:32:30.994 12:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:30.994 12:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:31.928 12:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.187 [2024-12-13 12:37:59.724219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65aa0 is same with the state(6) to be set 00:32:32.187 [2024-12-13 12:37:59.724264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65aa0 is same with the state(6) to be set 00:32:32.187 [2024-12-13 12:37:59.724278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65aa0 is same with the state(6) to be set 00:32:32.187 [2024-12-13
12:37:59.724284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65aa0 is same with the state(6) to be set 00:32:32.187 [... the same tcp.c:1790 *ERROR* line for tqpair=0xf65aa0, timestamps 12:37:59.724289 through 12:37:59.724753, elided ...] [2024-12-13 12:37:59.724758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65aa0 is same with the state(6) to be set 00:32:32.188 12:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:35.475 12:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:35.475 00
00:32:35.475 12:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:35.733 [2024-12-13 12:38:03.266758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66fe0 is same with the state(6) to be set 00:32:35.733 [... three more identical lines for tqpair=0xf66fe0, 12:38:03.266802 through 12:38:03.266817, elided ...] [2024-12-13 12:38:03.266823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf66fe0 is same with the state(6) to be set 00:32:35.733 12:38:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:39.013 12:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.013 [2024-12-13 12:38:06.480921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.013 12:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:39.947 12:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:40.206 [2024-12-13 12:38:07.692925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67ea0 is same with the state(6) to be set 00:32:40.206 [... the same tcp.c:1790 *ERROR* line for tqpair=0xf67ea0, timestamps 12:38:07.692960 through 12:38:07.693118, elided ...] [2024-12-13 12:38:07.693123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67ea0 is same with the state(6) to be set 00:32:40.206 12:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 479852 00:32:46.772 { 00:32:46.772 "results": [ 00:32:46.772 { 00:32:46.772 "job": "NVMe0n1", 00:32:46.772 "core_mask": "0x1", 00:32:46.772 "workload": "verify", 00:32:46.772 "status": "finished", 00:32:46.772 "verify_range": { 00:32:46.772 "start": 0, 00:32:46.772 "length": 16384 00:32:46.772 }, 00:32:46.772 "queue_depth": 128, 00:32:46.772 "io_size": 4096, 00:32:46.772 "runtime": 15.006766, 00:32:46.772 "iops": 11175.292531382178, 00:32:46.772 "mibps": 43.65348645071163, 00:32:46.772 "io_failed": 9565, 00:32:46.772 "io_timeout": 0,
00:32:46.772 "avg_latency_us": 10813.926722314898, 00:32:46.772 "min_latency_us": 417.40190476190475, 00:32:46.772 "max_latency_us": 31207.619047619046 00:32:46.772 } 00:32:46.772 ], 00:32:46.772 "core_count": 1 00:32:46.772 } 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479632 ']' 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479632' 00:32:46.772 killing process with pid 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479632 00:32:46.772 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:46.772 [2024-12-13 12:37:57.746849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:46.772 [2024-12-13 12:37:57.746901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479632 ] 00:32:46.772 [2024-12-13 12:37:57.817413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.772 [2024-12-13 12:37:57.840007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.772 Running I/O for 15 seconds... 
00:32:46.772 11157.00 IOPS, 43.58 MiB/s [2024-12-13T11:38:14.472Z] [2024-12-13 12:37:59.725329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.772 [2024-12-13 12:37:59.725497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.772 [2024-12-13 12:37:59.725503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:46.772 [... analogous nvme_io_qpair_print_command *NOTICE* records (READ lba:98864 through lba:99168, then WRITE lba:99240) and their ABORTED - SQ DELETION (00/08) completions, elided ...] [2024-12-13 12:37:59.726122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99248 len:8
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 
[2024-12-13 12:37:59.726269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.773 [2024-12-13 12:37:59.726358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.773 [2024-12-13 12:37:59.726366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.774 [2024-12-13 12:37:59.726444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 
12:37:59.726860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.774 [2024-12-13 12:37:59.726909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.774 [2024-12-13 12:37:59.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99680 len:8 PRP1 0x0 PRP2 0x0 00:32:46.774 [2024-12-13 12:37:59.726943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.774 [2024-12-13 12:37:59.726952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.726957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.726963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.726970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.726976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.726981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.726986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.726992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.726999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99704 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99712 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99720 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99728 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99736 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99744 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99752 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 
[2024-12-13 12:37:59.727165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99760 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99768 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99776 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99784 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99792 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99800 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727306] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.727316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.727322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.727328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.727333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.739706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.739734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.739767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.775 [2024-12-13 12:37:59.739801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:46.775 [2024-12-13 12:37:59.739830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.775 [2024-12-13 12:37:59.739836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:32:46.775 [2024-12-13 12:37:59.739844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.775 [2024-12-13 12:37:59.739892] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:46.775 [2024-12-13 12:37:59.739917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.775 [2024-12-13 12:37:59.739926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.776 [2024-12-13 12:37:59.739936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.776 [2024-12-13 12:37:59.739943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.776 [2024-12-13 12:37:59.739952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.776 [2024-12-13 12:37:59.739960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.776 [2024-12-13 12:37:59.739968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.776 [2024-12-13 12:37:59.739976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.776 [2024-12-13 12:37:59.739984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:46.776 [2024-12-13 12:37:59.740023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a73a0 (9): Bad file descriptor 00:32:46.776 [2024-12-13 12:37:59.743541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:46.776 [2024-12-13 12:37:59.895105] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:32:46.776 10349.00 IOPS, 40.43 MiB/s [2024-12-13T11:38:14.476Z] 10713.33 IOPS, 41.85 MiB/s [2024-12-13T11:38:14.476Z] 10900.00 IOPS, 42.58 MiB/s [2024-12-13T11:38:14.476Z]
[2024-12-13 12:38:03.268473-12:38:03.269534, condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: READ sqid:1 nsid:1 lba:71776-71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:71968-72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, every command ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-13 12:38:03.269554-12:38:03.269567, condensed] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 PRP1 0x0 PRP2 0x0, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-13 12:38:03.269594-12:38:03.269644, condensed] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-13 12:38:03.269650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a73a0 is same with the state(6) to be set
[2024-12-13 12:38:03.269790-12:38:03.269988, condensed] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request: queued i/o aborted and completed manually: WRITE sqid:1 cid:0 nsid:1 lba:72480-72536 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.778 [2024-12-13 12:38:03.269995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.269999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72544 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72552 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72560 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72568 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72576 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 
12:38:03.270146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.778 [2024-12-13 12:38:03.270232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.778 [2024-12-13 12:38:03.270237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.778 [2024-12-13 12:38:03.270243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:32:46.778 [2024-12-13 12:38:03.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72632 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72640 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72648 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72656 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72664 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72672 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72696 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72704 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72712 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72720 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72728 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72736 len:8 PRP1 0x0 PRP2 0x0 
00:32:46.779 [2024-12-13 12:38:03.270569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72744 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72752 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72760 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72768 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72776 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72784 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.270711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.270715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.270720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72792 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.270727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.281054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.281068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.281077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.281086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.281095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.281101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.779 [2024-12-13 12:38:03.281108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71848 len:8 PRP1 0x0 PRP2 0x0 00:32:46.779 [2024-12-13 12:38:03.281116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.779 [2024-12-13 12:38:03.281125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.779 [2024-12-13 12:38:03.281132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71856 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71864 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71872 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71880 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71888 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71896 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71904 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71912 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71920 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:46.780 [2024-12-13 12:38:03.281400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71928 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71936 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71944 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71952 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71960 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71776 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281583] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71784 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71792 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71800 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71808 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71816 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:32:46.780 [2024-12-13 12:38:03.281770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.780 [2024-12-13 12:38:03.281798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.780 [2024-12-13 12:38:03.281804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.780 [2024-12-13 12:38:03.281811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:32:46.780 [2024-12-13 12:38:03.281819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.281850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.281880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.281910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.281940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281954] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.281971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.281980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.281986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.281993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 
[2024-12-13 12:38:03.282327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72136 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.781 [2024-12-13 12:38:03.282443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.781 [2024-12-13 12:38:03.282450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72152 len:8 PRP1 0x0 PRP2 0x0 00:32:46.781 [2024-12-13 12:38:03.282458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.781 [2024-12-13 12:38:03.282467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.782 [2024-12-13 12:38:03.282473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.782 [2024-12-13 12:38:03.282480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72160 len:8 PRP1 0x0 PRP2 0x0 00:32:46.782 [2024-12-13 12:38:03.282488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.782 [2024-12-13 12:38:03.282497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.782 [2024-12-13 12:38:03.282504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.782 [2024-12-13 12:38:03.282511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72168 len:8 PRP1 0x0 PRP2 0x0
00:32:46.782 [2024-12-13 12:38:03.282519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.782 [2024-12-13 12:38:03.282527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:46.782 [2024-12-13 12:38:03.282534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same four-record cycle (print command, ABORTED - SQ DELETION (00/08) completion, aborting queued i/o, manual completion) repeats from 12:38:03.282541 through 12:38:03.291547 for queued WRITEs lba:72176-72472 (len:8 each, PRP1 0x0 PRP2 0x0) and queued READs lba:71824 and lba:71832 on sqid:1 ...]
00:32:46.783 [2024-12-13 12:38:03.291605] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:46.783 [2024-12-13 12:38:03.291620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:46.783 [2024-12-13 12:38:03.291668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a73a0 (9): Bad file descriptor
00:32:46.783 [2024-12-13 12:38:03.296857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:46.783 [2024-12-13 12:38:03.325422] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
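The burst above is the host-side signature of a dropped NVMe-oF path: once the TCP connection to 10.0.0.2:4421 fails, every command still queued on qpair 1 is completed manually with ABORTED - SQ DELETION (status code type 00h, generic status 08h, "Command Aborted due to SQ Deletion"; dnr:0 leaves the I/O retryable), after which bdev_nvme fails over to the next registered transport ID (10.0.0.2:4422) and resets the controller. A failover run like this depends on the controller having been attached with multiple paths up front. A minimal sketch of that setup with SPDK's rpc.py follows; the NQN, addresses, and ports are taken from the log, while the bdev name Nvme0 and the -x failover multipath mode are illustrative assumptions, not details recorded here:

  # Attach the primary path; -b names the resulting bdev (name is hypothetical).
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Register the alternate listeners the log later fails over between as
  # standby paths under the same controller name.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover

With a standby path registered, a transport error such as the "Bad file descriptor" flush failure above triggers exactly the disconnect / "Start failover" / "Resetting controller successful" sequence recorded here.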
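The per-interval performance ticks that follow are consistent with the 4 KiB I/O size implied by len:8 (eight 512-byte logical blocks per command): throughput in MiB/s should equal IOPS/256. A quick check of the first reading, assuming that block size:

  # 10865 IOPS x 8 blocks x 512 B/block, converted to MiB/s
  echo 'scale=2; 10865 * 8 * 512 / (1024 * 1024)' | bc
  # -> 42.44, matching the logged "10865.00 IOPS, 42.44 MiB/s"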
00:32:46.783 10865.00 IOPS, 42.44 MiB/s [2024-12-13T11:38:14.483Z] 10965.33 IOPS, 42.83 MiB/s [2024-12-13T11:38:14.483Z] 11023.57 IOPS, 43.06 MiB/s [2024-12-13T11:38:14.483Z] 11051.12 IOPS, 43.17 MiB/s [2024-12-13T11:38:14.483Z] 11103.00 IOPS, 43.37 MiB/s [2024-12-13T11:38:14.483Z]
[2024-12-13 12:38:07.694571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.783 [2024-12-13 12:38:07.694608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats from 12:38:07.694622 through 12:38:07.695748 for in-flight READs lba:85544-85776 (len:8, SGL TRANSPORT DATA BLOCK, varying cids) and in-flight WRITEs lba:85800-86176 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cids), each aborted with SQ DELETION (00/08); from 12:38:07.695776 the four-record queued-request cycle seen above then repeats for queued WRITEs lba:86184-86472 (cid:0, PRP1 0x0 PRP2 0x0), the last completion printed at 12:38:07.706858 ...]
00:32:46.787 [2024-12-13 12:38:07.706867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:46.787 [2024-12-13 12:38:07.706873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:46.787
[2024-12-13 12:38:07.706879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.706886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.706893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.706901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.706907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.706913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.706920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.706925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.706931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.706938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.706945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.706950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.706956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.706962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.706970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.706975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.706981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.706987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.706994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.706999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.707005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.707012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.707019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.707024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.707029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.707036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.707042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.707047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.707053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.707060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.707067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.707072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.707078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.707084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.707092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787 [2024-12-13 12:38:07.707098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-13 12:38:07.707104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-13 12:38:07.707110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-13 12:38:07.707117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-13 12:38:07.707122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-13 12:38:07.707127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85784 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-13 12:38:07.707134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-13 12:38:07.707146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-13 12:38:07.707152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85792 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-13 12:38:07.707159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707203] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:46.788 [2024-12-13 12:38:07.707227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.788 
[2024-12-13 12:38:07.707236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.788 [2024-12-13 12:38:07.707251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.788 [2024-12-13 12:38:07.707265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.788 [2024-12-13 12:38:07.707280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-13 12:38:07.707287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:46.788 [2024-12-13 12:38:07.707318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a73a0 (9): Bad file descriptor 00:32:46.788 [2024-12-13 12:38:07.710335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:46.788 [2024-12-13 12:38:07.732719] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
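After the final reset completes, the harness asserts (host/failover.sh@65-67 in the trace below) that exactly three 'Resetting controller successful' notices were logged, one per failover pass. A minimal standalone sketch of that check, assuming (as the later rm -f suggests) the count is taken from the try.txt log this run writes:

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$log")  # one notice per completed reset
  (( count == 3 )) || { echo "expected 3 successful resets, saw $count" >&2; exit 1; }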
00:32:46.788 11079.20 IOPS, 43.28 MiB/s [2024-12-13T11:38:14.488Z]
11095.00 IOPS, 43.34 MiB/s [2024-12-13T11:38:14.488Z]
11117.33 IOPS, 43.43 MiB/s [2024-12-13T11:38:14.488Z]
11137.62 IOPS, 43.51 MiB/s [2024-12-13T11:38:14.488Z]
11160.36 IOPS, 43.60 MiB/s
00:32:46.788 Latency(us)
00:32:46.788 [2024-12-13T11:38:14.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.788 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:46.788 Verification LBA range: start 0x0 length 0x4000
00:32:46.788 NVMe0n1 : 15.01 11175.29 43.65 637.38 0.00 10813.93 417.40 31207.62
00:32:46.788 [2024-12-13T11:38:14.488Z] ===================================================================================================================
00:32:46.788 [2024-12-13T11:38:14.488Z] Total : 11175.29 43.65 637.38 0.00 10813.93 417.40 31207.62
00:32:46.788 Received shutdown signal, test time was about 15.000000 seconds
00:32:46.788
00:32:46.788 Latency(us)
00:32:46.788 [2024-12-13T11:38:14.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.788 [2024-12-13T11:38:14.488Z] ===================================================================================================================
00:32:46.788 [2024-12-13T11:38:14.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=482287
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 482287 /var/tmp/bdevperf.sock
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 482287 ']'
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
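bdevperf is started here with -z and -r so it comes up idle and is driven entirely over the /var/tmp/bdevperf.sock RPC socket; the controllers are attached afterwards (next trace) with -x failover so that a dropped path fails over to one of the other listeners. A sketch of that launch-and-attach pattern with the flags as used above, paths abbreviated relative to the SPDK tree, and a simplified poll standing in for the harness's waitforlisten helper:

  sock=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
  pid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket answers
  until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests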
00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.788 12:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:46.788 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.788 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:46.788 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:46.788 [2024-12-13 12:38:14.302708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:46.788 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:47.046 [2024-12-13 12:38:14.491275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:47.046 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.304 NVMe0n1 00:32:47.304 12:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.561 00:32:47.561 12:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:48.127 00:32:48.127 12:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:48.127 12:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:48.127 12:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.385 12:38:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:51.667 12:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:51.667 12:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:51.667 12:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=482990 00:32:51.667 12:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:51.667 12:38:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 482990 00:32:52.600 { 00:32:52.600 "results": [ 00:32:52.600 { 00:32:52.600 "job": "NVMe0n1", 00:32:52.600 "core_mask": "0x1", 00:32:52.600 
"workload": "verify", 00:32:52.600 "status": "finished", 00:32:52.600 "verify_range": { 00:32:52.600 "start": 0, 00:32:52.600 "length": 16384 00:32:52.600 }, 00:32:52.600 "queue_depth": 128, 00:32:52.600 "io_size": 4096, 00:32:52.600 "runtime": 1.009898, 00:32:52.600 "iops": 11440.75936381694, 00:32:52.600 "mibps": 44.69046626490992, 00:32:52.600 "io_failed": 0, 00:32:52.600 "io_timeout": 0, 00:32:52.600 "avg_latency_us": 11143.27811732898, 00:32:52.601 "min_latency_us": 2262.552380952381, 00:32:52.601 "max_latency_us": 13856.182857142858 00:32:52.601 } 00:32:52.601 ], 00:32:52.601 "core_count": 1 00:32:52.601 } 00:32:52.601 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:52.601 [2024-12-13 12:38:13.951647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:52.601 [2024-12-13 12:38:13.951699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482287 ] 00:32:52.601 [2024-12-13 12:38:14.025096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.601 [2024-12-13 12:38:14.044843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.601 [2024-12-13 12:38:15.897484] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:52.601 [2024-12-13 12:38:15.897529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.601 [2024-12-13 12:38:15.897540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.601 [2024-12-13 12:38:15.897549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.601 [2024-12-13 12:38:15.897555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.601 [2024-12-13 12:38:15.897563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.601 [2024-12-13 12:38:15.897570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.601 [2024-12-13 12:38:15.897577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.601 [2024-12-13 12:38:15.897584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.601 [2024-12-13 12:38:15.897590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:52.601 [2024-12-13 12:38:15.897615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:52.601 [2024-12-13 12:38:15.897629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82e3a0 (9): Bad file descriptor 00:32:52.601 [2024-12-13 12:38:15.989945] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:52.601 Running I/O for 1 seconds... 00:32:52.601 11425.00 IOPS, 44.63 MiB/s 00:32:52.601 Latency(us) 00:32:52.601 [2024-12-13T11:38:20.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.601 Verification LBA range: start 0x0 length 0x4000 00:32:52.601 NVMe0n1 : 1.01 11440.76 44.69 0.00 0.00 11143.28 2262.55 13856.18 00:32:52.601 [2024-12-13T11:38:20.301Z] =================================================================================================================== 00:32:52.601 [2024-12-13T11:38:20.301Z] Total : 11440.76 44.69 0.00 0.00 11143.28 2262.55 13856.18 00:32:52.601 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:52.601 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:52.858 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:53.116 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.116 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:53.374 12:38:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:53.632 12:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 482287 ']' 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482287' 00:32:56.914 killing process with pid 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 482287 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:56.914 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.173 rmmod nvme_tcp 00:32:57.173 rmmod nvme_fabrics 00:32:57.173 rmmod nvme_keyring 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 479376 ']' 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 479376 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479376 ']' 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479376 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479376 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479376' 00:32:57.173 killing process with pid 479376 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479376 00:32:57.173 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479376 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.432 12:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.968 00:32:59.968 real 0m37.022s 00:32:59.968 user 1m57.332s 00:32:59.968 sys 0m7.855s 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:59.968 ************************************ 00:32:59.968 END TEST nvmf_failover 00:32:59.968 ************************************ 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.968 12:38:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.968 ************************************ 00:32:59.968 START TEST nvmf_host_discovery 00:32:59.968 ************************************ 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:59.969 * Looking for test storage... 
00:32:59.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.969 --rc genhtml_branch_coverage=1 00:32:59.969 --rc genhtml_function_coverage=1 00:32:59.969 --rc genhtml_legend=1 00:32:59.969 --rc geninfo_all_blocks=1 00:32:59.969 --rc geninfo_unexecuted_blocks=1 00:32:59.969 00:32:59.969 ' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.969 --rc genhtml_branch_coverage=1 00:32:59.969 --rc genhtml_function_coverage=1 00:32:59.969 --rc genhtml_legend=1 00:32:59.969 --rc geninfo_all_blocks=1 00:32:59.969 --rc geninfo_unexecuted_blocks=1 00:32:59.969 00:32:59.969 ' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.969 --rc genhtml_branch_coverage=1 00:32:59.969 --rc genhtml_function_coverage=1 00:32:59.969 --rc genhtml_legend=1 00:32:59.969 --rc geninfo_all_blocks=1 00:32:59.969 --rc geninfo_unexecuted_blocks=1 00:32:59.969 00:32:59.969 ' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.969 --rc genhtml_branch_coverage=1 00:32:59.969 --rc genhtml_function_coverage=1 00:32:59.969 --rc genhtml_legend=1 00:32:59.969 --rc geninfo_all_blocks=1 00:32:59.969 --rc geninfo_unexecuted_blocks=1 00:32:59.969 00:32:59.969 ' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:59.969 12:38:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:59.969 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.970 12:38:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:05.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:05.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.244 12:38:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:05.244 Found net devices under 0000:af:00.0: cvl_0_0 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:05.244 Found net devices under 0000:af:00.1: cvl_0_1 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.244 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.244 
12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.245 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.504 12:38:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:33:05.504 00:33:05.504 --- 10.0.0.2 ping statistics --- 00:33:05.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.504 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:33:05.504 00:33:05.504 --- 10.0.0.1 ping statistics --- 00:33:05.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.504 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=487350 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 487350 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 487350 ']' 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.504 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.764 [2024-12-13 12:38:33.225853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
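[Editor's note] nvmf_tcp_init (nvmf/common.sh) turns the two back-to-back E810 ports into a point-to-point 10.0.0.0/24 link: cvl_0_0 is moved into a fresh network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and prepending NVMF_TARGET_NS_CMD to NVMF_APP makes every later nvmf_tgt launch run inside that namespace. The essential steps, condensed from the trace (the ipts wrapper's comment tag and error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

The two one-packet pings above are the harness verifying the link before any NVMe/TCP traffic is attempted.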
00:33:05.764 [2024-12-13 12:38:33.225904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.764 [2024-12-13 12:38:33.303220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.764 [2024-12-13 12:38:33.324636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.764 [2024-12-13 12:38:33.324670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.764 [2024-12-13 12:38:33.324677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.764 [2024-12-13 12:38:33.324683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.764 [2024-12-13 12:38:33.324688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.764 [2024-12-13 12:38:33.325158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.764 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 [2024-12-13 12:38:33.467955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 [2024-12-13 12:38:33.480122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 null0 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 null1 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=487385 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 487385 /tmp/host.sock 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 487385 ']' 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:06.023 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.023 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.023 [2024-12-13 12:38:33.554641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
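[Editor's note] The test runs two SPDK apps side by side: the target (pid 487350, core mask 0x2) inside the namespace, answering RPCs on the default /var/tmp/spdk.sock, and a second nvmf_tgt acting as the discovery host (core mask 0x1) on -r /tmp/host.sock. In the trace, bare rpc_cmd goes to the target and rpc_cmd -s /tmp/host.sock to the host. A sketch of the same split using scripts/rpc.py directly (script path assumed; both commands are exercised in the trace):

    # target-side RPC (default socket): create the TCP transport
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # host-side RPC (explicit socket): start discovery against the target
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test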
00:33:06.023 [2024-12-13 12:38:33.554681] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487385 ] 00:33:06.023 [2024-12-13 12:38:33.626520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.023 [2024-12-13 12:38:33.649112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.282 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 [2024-12-13 12:38:34.045590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.541 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.800 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:06.800 12:38:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:07.366 [2024-12-13 12:38:34.806965] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:07.366 [2024-12-13 12:38:34.806986] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:07.366 [2024-12-13 12:38:34.807000] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:07.366 
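[Editor's note] Every "[[ '' == '' ]]" comparison above comes from two small helpers in host/discovery.sh that flatten RPC output into a single sorted line, polled by waitforcondition (autotest_common.sh) until the expected value appears. A sketch of all three, reconstructed from the xtrace (the real helpers differ in minor details):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        return 1
    }

The first poll fails ('' vs nvme0), the harness sleeps one second, the discovery service attaches nvme0 on 10.0.0.2:4420, and the second poll succeeds.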
[2024-12-13 12:38:34.895261] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:07.623 [2024-12-13 12:38:35.076207] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:07.623 [2024-12-13 12:38:35.076949] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a2df60:1 started. 00:33:07.623 [2024-12-13 12:38:35.078299] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:07.623 [2024-12-13 12:38:35.078315] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:07.623 [2024-12-13 12:38:35.084605] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a2df60 was disconnected and freed. delete nvme_qpair. 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.623 12:38:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.623 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:07.881 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 [2024-12-13 12:38:35.458537] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a18020:1 started. 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 [2024-12-13 12:38:35.507291] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a18020 was disconnected and freed. delete nvme_qpair. 
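[Editor's note] The notification checks track bdev-attach events: notify_get_notifications is queried starting from the last seen id, jq counts the returned array, and the cursor advances so each wave of events is counted only once (here one event for null0, with notify_id moving 0 -> 1 -> 2 across the two checks). A sketch of the counter, with names as in the trace and the cursor-advance inferred from the logged values:

    notify_id=0
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$(( notify_id + notification_count ))   # don't re-count old events
    }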
00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.882 [2024-12-13 12:38:35.561660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:07.882 [2024-12-13 12:38:35.561875] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:07.882 [2024-12-13 12:38:35.561894] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.882 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.140 [2024-12-13 12:38:35.648138] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:08.140 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:08.141 12:38:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:08.399 [2024-12-13 12:38:35.953461] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:08.399 [2024-12-13 12:38:35.953493] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:08.399 [2024-12-13 12:38:35.953501] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:08.399 [2024-12-13 12:38:35.953505] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
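[Editor's note] After the 4421 listener is added, the test waits until the discovery service has attached a second path, i.e. until the controller reports both service ports. The helper behind "$(get_subsystem_paths nvme0)" flattens the trsvcid of every attached path, numerically sorted, exactly as the jq/sort/xargs pipeline in the trace:

    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # poll until the output grows from "4420" to "4420 4421"
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

The first comparison above fails ("4420" vs "4420 4421"), the harness sleeps, the second qpair to 10.0.0.2:4421 is created, and the poll then passes.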
00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 [2024-12-13 12:38:36.821820] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:09.335 [2024-12-13 12:38:36.821841] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:09.335 [2024-12-13 12:38:36.830625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.335 [2024-12-13 12:38:36.830643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.335 [2024-12-13 12:38:36.830651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.335 [2024-12-13 12:38:36.830659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.335 [2024-12-13 12:38:36.830666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.335 [2024-12-13 12:38:36.830673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.335 [2024-12-13 12:38:36.830680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.335 [2024-12-13 12:38:36.830687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.335 [2024-12-13 12:38:36.830694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:09.335 [2024-12-13 12:38:36.840639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.335 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.335 [2024-12-13 12:38:36.850675] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.335 [2024-12-13 12:38:36.850686] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.335 [2024-12-13 12:38:36.850692] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.335 [2024-12-13 12:38:36.850699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.335 [2024-12-13 12:38:36.850715] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.335 [2024-12-13 12:38:36.850906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.335 [2024-12-13 12:38:36.850921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.335 [2024-12-13 12:38:36.850929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.335 [2024-12-13 12:38:36.850940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.335 [2024-12-13 12:38:36.850958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.335 [2024-12-13 12:38:36.850965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.335 [2024-12-13 12:38:36.850973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.335 [2024-12-13 12:38:36.850979] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.336 [2024-12-13 12:38:36.850985] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:09.336 [2024-12-13 12:38:36.850989] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 [2024-12-13 12:38:36.860745] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.860755] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.336 [2024-12-13 12:38:36.860759] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.860763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.336 [2024-12-13 12:38:36.860776] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.861032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.336 [2024-12-13 12:38:36.861045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.336 [2024-12-13 12:38:36.861053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.336 [2024-12-13 12:38:36.861063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.336 [2024-12-13 12:38:36.861078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.336 [2024-12-13 12:38:36.861085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.336 [2024-12-13 12:38:36.861091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.336 [2024-12-13 12:38:36.861097] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.336 [2024-12-13 12:38:36.861101] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.336 [2024-12-13 12:38:36.861105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 [2024-12-13 12:38:36.870807] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.870820] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.336 [2024-12-13 12:38:36.870828] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.870831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.336 [2024-12-13 12:38:36.870845] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.336 [2024-12-13 12:38:36.871000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.336 [2024-12-13 12:38:36.871012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.336 [2024-12-13 12:38:36.871019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.336 [2024-12-13 12:38:36.871030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.336 [2024-12-13 12:38:36.871039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.336 [2024-12-13 12:38:36.871044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.336 [2024-12-13 12:38:36.871051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.336 [2024-12-13 12:38:36.871056] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.336 [2024-12-13 12:38:36.871060] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.336 [2024-12-13 12:38:36.871064] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:09.336 [2024-12-13 12:38:36.880876] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.880889] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.336 [2024-12-13 12:38:36.880893] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.880897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.336 [2024-12-13 12:38:36.880910] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.336 [2024-12-13 12:38:36.881059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.336 [2024-12-13 12:38:36.881071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.336 [2024-12-13 12:38:36.881078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.336 [2024-12-13 12:38:36.881088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.336 [2024-12-13 12:38:36.881101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.336 [2024-12-13 12:38:36.881107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.336 [2024-12-13 12:38:36.881113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.336 [2024-12-13 12:38:36.881119] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.336 [2024-12-13 12:38:36.881123] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.336 [2024-12-13 12:38:36.881127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.336 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.336 [2024-12-13 12:38:36.890940] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.890955] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.336 [2024-12-13 12:38:36.890959] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.890963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.336 [2024-12-13 12:38:36.890976] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.336 [2024-12-13 12:38:36.891075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.336 [2024-12-13 12:38:36.891086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.336 [2024-12-13 12:38:36.891093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.336 [2024-12-13 12:38:36.891103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.336 [2024-12-13 12:38:36.891112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.336 [2024-12-13 12:38:36.891118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.336 [2024-12-13 12:38:36.891124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.336 [2024-12-13 12:38:36.891130] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.336 [2024-12-13 12:38:36.891134] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.336 [2024-12-13 12:38:36.891138] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 [2024-12-13 12:38:36.901008] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.901018] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.336 [2024-12-13 12:38:36.901022] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.901026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.336 [2024-12-13 12:38:36.901045] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.336 [2024-12-13 12:38:36.901197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.336 [2024-12-13 12:38:36.901208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.336 [2024-12-13 12:38:36.901216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.336 [2024-12-13 12:38:36.901226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.336 [2024-12-13 12:38:36.901235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.336 [2024-12-13 12:38:36.901241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.336 [2024-12-13 12:38:36.901247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.336 [2024-12-13 12:38:36.901253] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:09.336 [2024-12-13 12:38:36.901257] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.336 [2024-12-13 12:38:36.901261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.336 [2024-12-13 12:38:36.911076] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.336 [2024-12-13 12:38:36.911088] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.337 [2024-12-13 12:38:36.911092] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.337 [2024-12-13 12:38:36.911096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.337 [2024-12-13 12:38:36.911109] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.337 [2024-12-13 12:38:36.911271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.337 [2024-12-13 12:38:36.911283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.337 [2024-12-13 12:38:36.911291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.337 [2024-12-13 12:38:36.911301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.337 [2024-12-13 12:38:36.911310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.337 [2024-12-13 12:38:36.911316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.337 [2024-12-13 12:38:36.911322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.337 [2024-12-13 12:38:36.911328] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.337 [2024-12-13 12:38:36.911332] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.337 [2024-12-13 12:38:36.911336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.337 [2024-12-13 12:38:36.921139] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.337 [2024-12-13 12:38:36.921149] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.337 [2024-12-13 12:38:36.921152] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.337 [2024-12-13 12:38:36.921159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.337 [2024-12-13 12:38:36.921171] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.337 [2024-12-13 12:38:36.921322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.337 [2024-12-13 12:38:36.921332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.337 [2024-12-13 12:38:36.921338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.337 [2024-12-13 12:38:36.921347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.337 [2024-12-13 12:38:36.921356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.337 [2024-12-13 12:38:36.921362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.337 [2024-12-13 12:38:36.921368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.337 [2024-12-13 12:38:36.921373] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.337 [2024-12-13 12:38:36.921377] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.337 [2024-12-13 12:38:36.921381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.337 [2024-12-13 12:38:36.931202] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.337 [2024-12-13 12:38:36.931214] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.337 [2024-12-13 12:38:36.931218] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.337 [2024-12-13 12:38:36.931221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.337 [2024-12-13 12:38:36.931234] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.337 [2024-12-13 12:38:36.931466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.337 [2024-12-13 12:38:36.931478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.337 [2024-12-13 12:38:36.931485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.337 [2024-12-13 12:38:36.931494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.337 [2024-12-13 12:38:36.931503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.337 [2024-12-13 12:38:36.931513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.337 [2024-12-13 12:38:36.931519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.337 [2024-12-13 12:38:36.931525] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.337 [2024-12-13 12:38:36.931529] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.337 [2024-12-13 12:38:36.931533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.337 [2024-12-13 12:38:36.941264] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.337 [2024-12-13 12:38:36.941286] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.337 [2024-12-13 12:38:36.941291] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.337 [2024-12-13 12:38:36.941294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.337 [2024-12-13 12:38:36.941308] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
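The condition being polled in this stretch compares the per-path listening port; get_subsystem_paths, read straight off the host/discovery.sh@63 xtrace above, amounts to (a sketch):

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

While paths on both ports exist it prints "4420 4421", which fails the comparison against $NVMF_SECOND_PORT; once the 4420 path is gone it prints "4421" and the wait returns 0, as the trace below shows.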
00:33:09.337 [2024-12-13 12:38:36.941466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.337 [2024-12-13 12:38:36.941477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ffef0 with addr=10.0.0.2, port=4420 00:33:09.337 [2024-12-13 12:38:36.941484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ffef0 is same with the state(6) to be set 00:33:09.337 [2024-12-13 12:38:36.941494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ffef0 (9): Bad file descriptor 00:33:09.337 [2024-12-13 12:38:36.941504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.337 [2024-12-13 12:38:36.941510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.337 [2024-12-13 12:38:36.941517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.337 [2024-12-13 12:38:36.941522] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.337 [2024-12-13 12:38:36.941527] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.337 [2024-12-13 12:38:36.941531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.337 [2024-12-13 12:38:36.949812] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:09.337 [2024-12-13 12:38:36.949826] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:09.337 12:38:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:10.713 12:38:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
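get_notification_count, traced just above, keeps a running cursor: it asks the target for every notification with an ID above the last one seen, then advances the cursor by the count returned. Reconstructed from the xtrace (a sketch; variable names as they appear in the trace):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

That bookkeeping is why notification_count=0 leaves notify_id at 2 here, while the stop/start cycle below yields notification_count=2 and notify_id=4.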
00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.713 
12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.713 12:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.648 [2024-12-13 12:38:39.256477] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:11.648 [2024-12-13 12:38:39.256494] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:11.648 [2024-12-13 12:38:39.256504] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:11.648 [2024-12-13 12:38:39.342759] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:12.215 [2024-12-13 12:38:39.645078] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:12.215 [2024-12-13 12:38:39.645581] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x19fb8b0:1 started. 
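Every one of these checks runs through the same waitforcondition poller; its shape can be read off the autotest_common.sh@918-@924 xtrace lines (a sketch: the 1-second interval matches the sleep 1 traced earlier, and the exhausted-retries path is assumed to return non-zero):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # cond is a shell expression such as '[[ "$(get_bdev_list)" == "" ]]'
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }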
00:33:12.215 [2024-12-13 12:38:39.647123] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.215 [2024-12-13 12:38:39.647147] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.215 [2024-12-13 12:38:39.656289] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x19fb8b0 was disconnected and freed. delete nvme_qpair. 
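The request dump that follows records the rejected duplicate: bdev_nvme_start_discovery refuses to start a second discovery service under a bdev name (-b nvme) that is already in use, failing with code -17, "File exists". An equivalent standalone invocation (a sketch; scripts/rpc.py stands in for the test's rpc_cmd wrapper, with the socket and arguments copied from the trace):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w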
00:33:12.215 request: 00:33:12.215 { 00:33:12.215 "name": "nvme", 00:33:12.215 "trtype": "tcp", 00:33:12.215 "traddr": "10.0.0.2", 00:33:12.215 "adrfam": "ipv4", 00:33:12.215 "trsvcid": "8009", 00:33:12.215 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.215 "wait_for_attach": true, 00:33:12.215 "method": "bdev_nvme_start_discovery", 00:33:12.215 "req_id": 1 00:33:12.215 } 00:33:12.215 Got JSON-RPC error response 00:33:12.215 response: 00:33:12.215 { 00:33:12.215 "code": -17, 00:33:12.215 "message": "File exists" 00:33:12.215 } 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.215 request: 00:33:12.215 { 00:33:12.215 "name": "nvme_second", 00:33:12.215 "trtype": "tcp", 00:33:12.215 "traddr": "10.0.0.2", 00:33:12.215 "adrfam": "ipv4", 00:33:12.215 "trsvcid": "8009", 00:33:12.215 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.215 "wait_for_attach": true, 00:33:12.215 "method": "bdev_nvme_start_discovery", 00:33:12.215 "req_id": 1 00:33:12.215 } 00:33:12.215 Got JSON-RPC error response 00:33:12.215 response: 00:33:12.215 { 00:33:12.215 "code": -17, 00:33:12.215 "message": "File exists" 00:33:12.215 } 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.215 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.216 12:38:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.216 12:38:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.590 [2024-12-13 12:38:40.891926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.590 [2024-12-13 12:38:40.891954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a363d0 with addr=10.0.0.2, port=8010 00:33:13.590 [2024-12-13 12:38:40.891972] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:13.590 [2024-12-13 12:38:40.891979] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:13.590 [2024-12-13 12:38:40.891985] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:14.523 [2024-12-13 12:38:41.894427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-12-13 12:38:41.894451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a363d0 with addr=10.0.0.2, port=8010 00:33:14.523 [2024-12-13 12:38:41.894463] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:14.523 [2024-12-13 12:38:41.894469] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:14.523 [2024-12-13 12:38:41.894475] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:15.458 [2024-12-13 12:38:42.896611] 
bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:15.458 request: 00:33:15.458 { 00:33:15.458 "name": "nvme_second", 00:33:15.458 "trtype": "tcp", 00:33:15.458 "traddr": "10.0.0.2", 00:33:15.458 "adrfam": "ipv4", 00:33:15.458 "trsvcid": "8010", 00:33:15.458 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:15.458 "wait_for_attach": false, 00:33:15.458 "attach_timeout_ms": 3000, 00:33:15.458 "method": "bdev_nvme_start_discovery", 00:33:15.458 "req_id": 1 00:33:15.458 } 00:33:15.458 Got JSON-RPC error response 00:33:15.458 response: 00:33:15.458 { 00:33:15.458 "code": -110, 00:33:15.458 "message": "Connection timed out" 00:33:15.458 } 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 487385 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.458 rmmod nvme_tcp 00:33:15.458 rmmod nvme_fabrics 00:33:15.458 rmmod nvme_keyring 00:33:15.458 12:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:15.458 12:38:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 487350 ']' 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 487350 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 487350 ']' 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 487350 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487350 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487350' 00:33:15.458 killing process with pid 487350 00:33:15.458 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 487350 00:33:15.459 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 487350 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.787 12:38:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.793 00:33:17.793 real 0m18.158s 00:33:17.793 user 0m22.603s 00:33:17.793 sys 0m5.822s 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.793 ************************************ 00:33:17.793 END TEST nvmf_host_discovery 00:33:17.793 ************************************ 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.793 ************************************ 00:33:17.793 START TEST nvmf_host_multipath_status 00:33:17.793 ************************************ 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:17.793 * Looking for test storage... 00:33:17.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.793 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:18.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.067 --rc genhtml_branch_coverage=1 00:33:18.067 --rc genhtml_function_coverage=1 00:33:18.067 --rc genhtml_legend=1 00:33:18.067 --rc geninfo_all_blocks=1 00:33:18.067 --rc geninfo_unexecuted_blocks=1 00:33:18.067 00:33:18.067 ' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:18.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.067 --rc genhtml_branch_coverage=1 00:33:18.067 --rc genhtml_function_coverage=1 00:33:18.067 --rc genhtml_legend=1 00:33:18.067 --rc geninfo_all_blocks=1 00:33:18.067 --rc geninfo_unexecuted_blocks=1 00:33:18.067 00:33:18.067 ' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:18.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.067 --rc genhtml_branch_coverage=1 00:33:18.067 --rc genhtml_function_coverage=1 00:33:18.067 --rc genhtml_legend=1 00:33:18.067 --rc geninfo_all_blocks=1 00:33:18.067 --rc geninfo_unexecuted_blocks=1 00:33:18.067 00:33:18.067 ' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:18.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.067 --rc genhtml_branch_coverage=1 00:33:18.067 --rc genhtml_function_coverage=1 00:33:18.067 --rc genhtml_legend=1 00:33:18.067 --rc geninfo_all_blocks=1 00:33:18.067 --rc geninfo_unexecuted_blocks=1 00:33:18.067 00:33:18.067 ' 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
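The lcov probe traced above gates the extra coverage flags on the tool's major version. A minimal stand-alone sketch of that dotted-version test, assuming GNU sort is available (the harness's cmp_versions in scripts/common.sh splits the fields manually instead; the lt helper, the sort -V trick, and the awk extraction here are illustrative, not the harness code):

# Succeed when $1 sorts strictly before $2 as a version string.
lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Enable the extra lcov flags only for lcov < 2, as the trace above decides.
if lt "$(lcov --version | awk '{print $NF}')" 2; then
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi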
00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.067 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.068 12:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.494 12:38:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:23.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
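Both E810 functions (0x8086 - 0x159b at 0000:af:00.0 and 0000:af:00.1) are matched here, and common.sh then resolves each PCI function to its kernel interface by expanding /sys/bus/pci/devices/$pci/net/*. An illustrative stand-alone equivalent of that lookup (the PCI addresses are the ones from this run; the loop is a sketch, not the harness code):

# List the net interfaces that sit behind each matched PCI function,
# the same sysfs expansion the trace performs above.
for pci in 0000:af:00.0 0000:af:00.1; do
  echo "Found net devices under $pci:" "$(ls /sys/bus/pci/devices/$pci/net)"
done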
00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:23.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:23.494 Found net devices under 0000:af:00.0: cvl_0_0 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:33:23.494 Found net devices under 0000:af:00.1: cvl_0_1 00:33:23.494 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.495 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.776 12:38:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:33:23.776 00:33:23.776 --- 10.0.0.2 ping statistics --- 00:33:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.776 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:23.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:23.776 00:33:23.776 --- 10.0.0.1 ping statistics --- 00:33:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.776 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=492587 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 492587 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 492587 ']' 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.776 12:38:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.776 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:24.100 [2024-12-13 12:38:51.506170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:24.100 [2024-12-13 12:38:51.506223] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.100 [2024-12-13 12:38:51.583032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:24.100 [2024-12-13 12:38:51.606081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.100 [2024-12-13 12:38:51.606113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.100 [2024-12-13 12:38:51.606121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.100 [2024-12-13 12:38:51.606127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.100 [2024-12-13 12:38:51.606132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.100 [2024-12-13 12:38:51.607163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.100 [2024-12-13 12:38:51.607165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.100 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=492587 00:33:24.101 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:24.359 [2024-12-13 12:38:51.914979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.359 12:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:24.618 Malloc0 00:33:24.618 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:24.877 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.136 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.136 [2024-12-13 12:38:52.753118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.136 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:25.394 [2024-12-13 12:38:52.945615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:25.394 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:25.394 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=492845 00:33:25.394 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:25.394 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 492845 /var/tmp/bdevperf.sock 00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 492845 ']' 00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:25.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
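Up to this point the target side has been assembled entirely over RPC. Condensed from the logged calls (the full /var/jenkins/... script path is shortened to rpc.py), the same setup replayed by hand would be:

# Transport, backing bdev, subsystem, namespace, and the two listeners
# (ports 4420 and 4421) whose ANA states the multipath test flips between.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421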
00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.395 12:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.653 12:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.653 12:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:25.653 12:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:25.911 12:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:26.170 Nvme0n1 00:33:26.170 12:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:26.428 Nvme0n1 00:33:26.428 12:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:26.428 12:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:28.959 12:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:28.959 12:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:28.959 12:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.959 12:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:29.894 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:29.894 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:29.894 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.894 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:30.152 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.152 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:30.152 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.152 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:30.410 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:30.410 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:30.410 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.411 12:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.669 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.927 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.927 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.927 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.927 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:31.186 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.186 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:31.186 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:33:31.444 12:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:31.702 12:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:32.637 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:32.637 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:32.637 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.637 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.896 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.896 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.896 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.896 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.154 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.412 12:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.412 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.412 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.412 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
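Each check_status round traced here pairs nvmf_subsystem_listener_set_ana_state on the target with port_status probes against the bdevperf RPC socket. Condensed from the logged calls, a single probe looks like this (rpc.py again stands in for the full script path; 4420/current is one of the six listener flags each round inspects):

# Ask bdevperf for its current I/O paths and extract one flag for one listener.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
  | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'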
00:33:33.412 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.671 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.671 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.671 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.671 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.929 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.929 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:33.929 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.188 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:34.188 12:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:35.564 12:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:35.564 12:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:35.564 12:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.564 12:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.564 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:35.822 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.822 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:35.822 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.822 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.081 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.081 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.081 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.081 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.339 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.339 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.339 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.339 12:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.598 12:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.598 12:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:36.598 12:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:36.856 12:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:36.856 12:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:38.230 12:39:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.230 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:38.488 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.488 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:38.488 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.488 12:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:38.488 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.488 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:38.488 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:38.488 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.747 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.747 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:38.747 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.747 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.005 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.005 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:39.005 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.005 12:39:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.263 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.263 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:39.263 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:39.263 12:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:39.521 12:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:40.456 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:40.456 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:40.456 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.456 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.714 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.714 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:40.714 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.714 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:40.972 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.972 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:40.972 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.972 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.230 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.230 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.230 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.231 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.231 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.231 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:41.231 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.489 12:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.489 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.489 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:41.489 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.489 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:41.747 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.747 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:41.747 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:42.006 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:42.264 12:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:43.201 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:43.201 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:43.201 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.201 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.460 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.460 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:43.460 12:39:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.460 12:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.460 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.460 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.460 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.460 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.718 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.718 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.718 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.718 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.977 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.977 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:43.977 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.977 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.235 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.236 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:44.236 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.236 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.494 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.494 12:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:44.494 12:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:44.494 12:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:44.752 12:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:45.010 12:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:45.945 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:45.946 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:45.946 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.946 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:46.204 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.204 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:46.204 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.204 12:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:46.463 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.463 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:46.463 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.463 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.721 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.721 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.721 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.721 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.980 12:39:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.980 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:47.238 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.238 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:47.238 12:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:47.497 12:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:47.754 12:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:48.688 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:48.688 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:48.688 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.688 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.947 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.947 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:48.947 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.947 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:49.206 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.206 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:49.206 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.206 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:49.464 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.464 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:49.464 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.464 12:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.465 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.465 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:49.465 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.465 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.723 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.723 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:49.723 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.723 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.981 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.981 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:49.981 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:50.240 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:50.498 12:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
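The repeating block above is the heart of the test loop, and every iteration traces the same two multipath_status.sh helpers: set_ANA_state issues one nvmf_subsystem_listener_set_ana_state RPC per listener, the script sleeps one second so the initiator can observe the ANA change, and check_status then expands into six port_status probes against bdev_nvme_get_io_paths. Below is a minimal bash sketch of the two helpers as reconstructed from the set -x trace; only the rpc.py and jq invocations are verbatim, while the function bodies, argument names, and $rootdir (standing in for the spdk checkout under the workspace) are inferred rather than copied from the script:

  # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
  set_ANA_state() {
      "$rootdir"/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rootdir"/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # succeed only if one field (current/connected/accessible) of the io_path
  # listening on port $1 matches the expected value in $3
  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      actual=$("$rootdir"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }

The [[ true == \t\r\u\e ]] lines in the trace are these same comparisons echoed back by set -x, with the right-hand side glob-escaped by bash.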
00:33:51.433 12:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:51.433 12:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:51.433 12:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.433 12:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.692 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.692 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:51.692 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.692 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.950 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:52.208 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.208 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:52.208 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.208 12:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:52.466 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.466 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:52.466 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.466 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:52.725 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.725 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:52.725 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:52.983 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:53.242 12:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:54.177 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:54.177 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:54.177 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.177 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:54.435 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.435 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:54.435 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.435 12:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:54.436 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.436 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:54.436 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.436 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.694 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:33:54.694 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:54.694 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.694 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:54.953 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.953 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:54.953 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.953 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:55.212 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:55.212 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:55.212 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:55.212 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 492845
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 492845 ']'
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 492845
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:55.470 12:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492845
00:33:55.470 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:33:55.470 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:33:55.470 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492845'
00:33:55.470 killing process with pid 492845
00:33:55.470 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 492845
00:33:55.470 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 492845
00:33:55.470 {
00:33:55.470 "results": [
00:33:55.470 {
00:33:55.470 "job": "Nvme0n1",
00:33:55.470 "core_mask": "0x4",
00:33:55.470 "workload": "verify",
00:33:55.470 "status": "terminated",
00:33:55.470 "verify_range": {
00:33:55.470 "start": 0,
00:33:55.470 "length": 16384
00:33:55.470 },
00:33:55.470 "queue_depth": 128,
00:33:55.470 "io_size": 4096,
00:33:55.470 "runtime": 28.804175,
00:33:55.470 "iops": 10725.59793849329,
00:33:55.470 "mibps": 41.89686694723942,
00:33:55.470 "io_failed": 0,
00:33:55.470 "io_timeout": 0,
00:33:55.470 "avg_latency_us": 11913.555852524021,
00:33:55.470 "min_latency_us": 390.0952380952381,
00:33:55.470 "max_latency_us": 3019898.88
00:33:55.470 }
00:33:55.470 ],
00:33:55.470 "core_count": 1
00:33:55.470 }
00:33:55.744 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 492845
00:33:55.744 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:55.744 [2024-12-13 12:38:53.008437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:33:55.744 [2024-12-13 12:38:53.008487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492845 ]
00:33:55.744 [2024-12-13 12:38:53.078650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:55.744 [2024-12-13 12:38:53.101173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:33:55.744 Running I/O for 90 seconds...
00:33:55.744 11497.00 IOPS, 44.91 MiB/s [2024-12-13T11:39:23.444Z] 11561.00 IOPS, 45.16 MiB/s [2024-12-13T11:39:23.444Z] 11595.33 IOPS, 45.29 MiB/s [2024-12-13T11:39:23.444Z] 11582.50 IOPS, 45.24 MiB/s [2024-12-13T11:39:23.444Z] 11580.60 IOPS, 45.24 MiB/s [2024-12-13T11:39:23.444Z] 11569.83 IOPS, 45.19 MiB/s [2024-12-13T11:39:23.444Z] 11563.86 IOPS, 45.17 MiB/s [2024-12-13T11:39:23.444Z] 11561.00 IOPS, 45.16 MiB/s [2024-12-13T11:39:23.444Z] 11577.56 IOPS, 45.22 MiB/s [2024-12-13T11:39:23.444Z] 11589.00 IOPS, 45.27 MiB/s [2024-12-13T11:39:23.444Z] 11578.09 IOPS, 45.23 MiB/s [2024-12-13T11:39:23.444Z] 11583.67 IOPS, 45.25 MiB/s [2024-12-13T11:39:23.444Z]
[2024-12-13 12:39:06.908385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.744 [2024-12-13 12:39:06.908422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:55.744 [2024-12-13 12:39:06.908473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.744 [2024-12-13 12:39:06.908481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:55.744 [2024-12-13 12:39:06.908495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.744 [2024-12-13 12:39:06.908502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:55.744 [2024-12-13 12:39:06.908514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744
[2024-12-13 12:39:06.908521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.908986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.908994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.909006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.909013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.909025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.909033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.909045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.909053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.909065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.744 [2024-12-13 12:39:06.909072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.744 [2024-12-13 12:39:06.909084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 
dnr:0 00:33:55.745 [2024-12-13 12:39:06.909102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.909242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.909954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.909979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.909995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.910002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.910025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.910053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.910078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.745 [2024-12-13 12:39:06.910101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.745 [2024-12-13 12:39:06.910418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:55.745 [2024-12-13 12:39:06.910425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:55.745 [2024-12-13 12:39:06.910440 - 12:39:06.911610] nvme_qpair.c: [~48 similar command/completion *NOTICE* pairs condensed] nvme_io_qpair_print_command WRITE sqid:1 nsid:1 lba:130128-130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:129616-129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:005f-000e p:0 m:0 dnr:0
00:33:55.746 11328.23 IOPS, 44.25 MiB/s [2024-12-13T11:39:23.446Z]
10519.07 IOPS, 41.09 MiB/s [2024-12-13T11:39:23.446Z]
9817.80 IOPS, 38.35 MiB/s [2024-12-13T11:39:23.446Z]
9400.69 IOPS, 36.72 MiB/s [2024-12-13T11:39:23.446Z]
9532.12 IOPS, 37.23 MiB/s [2024-12-13T11:39:23.446Z]
9641.44 IOPS, 37.66 MiB/s [2024-12-13T11:39:23.446Z]
9824.63 IOPS, 38.38 MiB/s [2024-12-13T11:39:23.446Z]
10016.20 IOPS, 39.13 MiB/s [2024-12-13T11:39:23.447Z]
10190.71 IOPS, 39.81 MiB/s [2024-12-13T11:39:23.447Z]
10259.86 IOPS, 40.08 MiB/s [2024-12-13T11:39:23.447Z]
10310.04 IOPS, 40.27 MiB/s [2024-12-13T11:39:23.447Z]
10377.00 IOPS, 40.54 MiB/s [2024-12-13T11:39:23.447Z]
10500.64 IOPS, 41.02 MiB/s [2024-12-13T11:39:23.447Z]
10616.08 IOPS, 41.47 MiB/s [2024-12-13T11:39:23.447Z]
00:33:55.747 [2024-12-13 12:39:20.674591 - 12:39:20.680946] nvme_qpair.c: [~155 similar command/completion *NOTICE* pairs condensed] nvme_io_qpair_print_command WRITE sqid:1 nsid:1 lba:29704-30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:29384-30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0001-001b p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.680953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.680966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.680973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:55.749 [2024-12-13 12:39:20.682600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.682694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.682701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.749 [2024-12-13 12:39:20.684475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.749 [2024-12-13 12:39:20.684579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.749 [2024-12-13 12:39:20.684586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.684599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.684617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.684624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:33:55.750 [2024-12-13 12:39:20.684966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.684980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.684995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.685187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.685310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.685317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:55.750 [2024-12-13 12:39:20.686737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.686862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.686902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.686914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.696127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.696174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.696194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.696215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.696255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.696331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.696339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.697860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.697883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.697905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.697927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.697948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.697969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.697982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.697990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.698003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.698010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.698024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.698031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.698966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.698981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:33:55.750 [2024-12-13 12:39:20.699000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.699007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.699028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.699048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.750 [2024-12-13 12:39:20.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.699089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.699110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.750 [2024-12-13 12:39:20.699130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.750 [2024-12-13 12:39:20.699143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.751 [2024-12-13 12:39:20.699213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.751 [2024-12-13 12:39:20.699318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.751 [2024-12-13 12:39:20.699339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.751 [2024-12-13 12:39:20.699359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.751 [2024-12-13 12:39:20.699380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.751 [2024-12-13 12:39:20.699393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.751 [2024-12-13 12:39:20.699400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:55.751 [2024-12-13 12:39:20.699413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.751 [2024-12-13 12:39:20.699421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:55.751 [2024-12-13 12:39:20.699434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.751 [2024-12-13 12:39:20.699442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0
[... several hundred similar NOTICE pairs elided, timestamps 12:39:20.699 through 12:39:20.711: every outstanding READ and WRITE on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), all with p:0 m:0 dnr:0 ...]
lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.711800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.713255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.713356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.713375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.713394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.753 [2024-12-13 12:39:20.713412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.713462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.753 [2024-12-13 12:39:20.713469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:55.753 [2024-12-13 12:39:20.714662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.714679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.714761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:33:55.754 [2024-12-13 12:39:20.714817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.714843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.714862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.714977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.714989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.714996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.715485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.715515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.715522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.716354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.716404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.716410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:33:55.754 [2024-12-13 12:39:20.717855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.754 [2024-12-13 12:39:20.717881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.754 [2024-12-13 12:39:20.717900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.754 [2024-12-13 12:39:20.717911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.717918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.717930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.717937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.717949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.717956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.717968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.717975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.717987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.717993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.718389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:55.755 [2024-12-13 12:39:20.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.718420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.718426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.755 [2024-12-13 12:39:20.719950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.719981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.719987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.755 [2024-12-13 12:39:20.720149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.755 [2024-12-13 12:39:20.720156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 
m:0 dnr:0
00:33:55.755 [2024-12-13 12:39:20.720168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.755 [2024-12-13 12:39:20.720175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:55.755 [2024-12-13 12:39:20.720885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.755 [2024-12-13 12:39:20.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[... 12:39:20.720 through 12:39:20.729: the same command/completion pair repeats for every outstanding READ and WRITE on qid:1 (cids 4 through 124, lbas roughly 30760 through 33056); each command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the ANA status reported while the multipath status test holds the path in an inaccessible state ...]
00:33:55.757 10681.67 IOPS, 41.73 MiB/s [2024-12-13T11:39:23.457Z]
00:33:55.757 10705.43 IOPS, 41.82 MiB/s [2024-12-13T11:39:23.457Z]
00:33:55.757 Received shutdown signal, test time was about 28.804808 seconds
00:33:55.757
00:33:55.757                                                                                              Latency(us)
00:33:55.757 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:33:55.757 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:55.757 Verification LBA range: start 0x0 length 0x4000
00:33:55.757 Nvme0n1                                                                  :      28.80   10725.60      41.90       0.00     0.00   11913.56     390.10   3019898.88
00:33:55.757 ===================================================================================================================
00:33:55.758 Total                                                                    :              10725.60      41.90       0.00     0.00   11913.56     390.10   3019898.88
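The summary rows are internally consistent: at queue depth 128, Little's law (average latency is roughly depth divided by IOPS) predicts the reported average almost exactly. A quick check with standard tools, not part of the test output:

  awk 'BEGIN { printf "%.2f us\n", 128 / 10725.60 * 1e6 }'   # 11934.06 us, vs. the reported 11913.56 us average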
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:55.758 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:55.758 rmmod nvme_tcp
00:33:55.758 rmmod nvme_fabrics
00:33:55.758 rmmod nvme_keyring
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 492587 ']'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 492587 ']'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492587'
00:33:56.017 killing process with pid 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 492587
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:56.017 12:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
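For readers following the trace, the killprocess helper invoked above reduces to roughly the following. This is a paraphrase of the traced commands, not the verbatim autotest_common.sh source, and only the Linux branch seen in this run is shown:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                             # @954: a PID argument is required
      kill -0 "$pid" 2>/dev/null || return 1                # @958: bail out if it already exited
      if [ "$(uname)" = Linux ]; then                       # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 in this run
      fi
      [ "$process_name" = sudo ] && return 1                # @964: never signal a sudo wrapper
      echo "killing process with pid $pid"                  # @972
      kill "$pid"                                           # @973: default SIGTERM
      wait "$pid"                                           # @978: reap the child
  }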
00:33:58.553
00:33:58.553 real	0m40.391s
00:33:58.553 user	1m49.807s
00:33:58.553 sys	0m11.363s
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:58.553 ************************************
00:33:58.553 END TEST nvmf_host_multipath_status
00:33:58.553 ************************************
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.553 ************************************
00:33:58.553 START TEST nvmf_discovery_remove_ifc
00:33:58.553 ************************************
00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:58.553 * Looking for test storage...
00:33:58.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
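The banner pairs and the real/user/sys block above come from the run_test harness. A reduced sketch of its shape, inferred from the output rather than taken from the actual autotest_common.sh implementation:

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                 # emits the real/user/sys block on completion
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }

Here it wraps discovery_remove_ifc.sh with --transport=tcp, as the @28 line above shows.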
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:58.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.553 --rc genhtml_branch_coverage=1 00:33:58.553 --rc genhtml_function_coverage=1 00:33:58.553 --rc genhtml_legend=1 00:33:58.553 --rc geninfo_all_blocks=1 00:33:58.553 --rc geninfo_unexecuted_blocks=1 00:33:58.553 00:33:58.553 ' 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:58.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.553 --rc genhtml_branch_coverage=1 00:33:58.553 --rc genhtml_function_coverage=1 00:33:58.553 --rc genhtml_legend=1 00:33:58.553 --rc geninfo_all_blocks=1 00:33:58.553 --rc geninfo_unexecuted_blocks=1 00:33:58.553 00:33:58.553 ' 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:58.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.553 --rc genhtml_branch_coverage=1 00:33:58.553 --rc genhtml_function_coverage=1 00:33:58.553 --rc genhtml_legend=1 00:33:58.553 --rc geninfo_all_blocks=1 00:33:58.553 --rc geninfo_unexecuted_blocks=1 00:33:58.553 00:33:58.553 ' 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:58.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.553 --rc genhtml_branch_coverage=1 00:33:58.553 --rc genhtml_function_coverage=1 00:33:58.553 --rc genhtml_legend=1 
00:33:58.553 --rc geninfo_all_blocks=1 00:33:58.553 --rc geninfo_unexecuted_blocks=1 00:33:58.553 00:33:58.553 ' 00:33:58.553 12:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.553 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:58.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.554 12:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.126 12:39:31 
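[Editor's note] The "[: : integer expression expected" complaint just above is a real, if harmless, script bug rather than a test failure: line 33 of nvmf/common.sh runs '[' '' -eq 1 ']' because the variable it tests is empty in this environment, '[' rejects the non-numeric operand, and the condition simply evaluates false so the run continues. The usual guard (variable name hypothetical, for illustration only):

  [ '' -eq 1 ]                    # reproduces the error: '' is not an integer
  [ "${SOME_FLAG:-0}" -eq 1 ]     # default the value so the test is always numeric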
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:05.126 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:05.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:05.126 Found net devices under 0000:af:00.0: cvl_0_0 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:05.126 Found net devices under 0000:af:00.1: cvl_0_1 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.126 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.127 12:39:31 
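[Editor's note] Both E810 ports (0x8086:0x159b, ice driver) have been matched, and nvmf_tcp_init now builds the two-namespace loopback the test runs over: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2, while the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace. Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk                          # isolated target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up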
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:34:05.127 00:34:05.127 --- 10.0.0.2 ping statistics --- 00:34:05.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.127 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:05.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:34:05.127 00:34:05.127 --- 10.0.0.1 ping statistics --- 00:34:05.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.127 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=501708 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 501708 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
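[Editor's note] With the iptables ACCEPT rule in place and both directions answering ping, the target application is launched inside the namespace (NVMF_APP is prefixed with the ip netns exec command) and waitforlisten polls its RPC socket. A minimal sketch, assuming the default /var/tmp/spdk.sock socket and rpc_get_methods as the readiness probe:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # roughly what waitforlisten does: retry until the app answers an RPC
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done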
00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501708 ']' 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.127 12:39:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 [2024-12-13 12:39:31.949756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:05.127 [2024-12-13 12:39:31.949806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.127 [2024-12-13 12:39:32.010068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.127 [2024-12-13 12:39:32.030843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.127 [2024-12-13 12:39:32.030878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.127 [2024-12-13 12:39:32.030885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.127 [2024-12-13 12:39:32.030891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.127 [2024-12-13 12:39:32.030896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:05.127 [2024-12-13 12:39:32.031408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 [2024-12-13 12:39:32.178213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.127 [2024-12-13 12:39:32.186402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:05.127 null0 00:34:05.127 [2024-12-13 12:39:32.218375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=501879 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 501879 /tmp/host.sock 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501879 ']' 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:05.127 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 [2024-12-13 12:39:32.287729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
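[Editor's note] The rpc_cmd block traced at 12:39:32 is what produced the transport-init and listen notices: the target gets a TCP transport, a null bdev, and an NVM subsystem on 10.0.0.2:4420, plus a discovery listener on port 8009. The RPC names below are real SPDK RPCs and the addresses/NQNs come from the trace; the null-bdev geometry and the exact flag set are assumptions:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1000 512                # size/block size assumed
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009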
00:34:05.127 [2024-12-13 12:39:32.287772] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501879 ] 00:34:05.127 [2024-12-13 12:39:32.362915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.127 [2024-12-13 12:39:32.385552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.127 12:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.063 [2024-12-13 12:39:33.575263] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:06.063 [2024-12-13 12:39:33.575285] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:06.063 [2024-12-13 12:39:33.575298] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:06.064 [2024-12-13 12:39:33.701676] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:06.064 [2024-12-13 12:39:33.756199] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:06.064 [2024-12-13 12:39:33.756954] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd16b50:1 started. 
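[Editor's note] The second nvmf_tgt (pid 501879, RPC socket /tmp/host.sock) plays the NVMe-oF host. Because it was started with --wait-for-rpc, the harness first sets the bdev_nvme options, then finishes framework init, then issues the discovery connect that produced the attach messages above; rpc_cmd is the harness wrapper around scripts/rpc.py, so the equivalent direct calls are:

  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc.py -s /tmp/host.sock framework_start_init
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The deliberately short loss/reconnect timeouts are what let the interface-removal phase below finish in seconds.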
00:34:06.064 [2024-12-13 12:39:33.758241] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:06.064 [2024-12-13 12:39:33.758281] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:06.064 [2024-12-13 12:39:33.758299] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:06.064 [2024-12-13 12:39:33.758310] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:06.064 [2024-12-13 12:39:33.758325] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:06.064 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.064 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:06.064 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.322 [2024-12-13 12:39:33.763618] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd16b50 was disconnected and freed. delete nvme_qpair. 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.322 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.323 12:39:33 
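[Editor's note] wait_for_bdev is the poll loop that drives the rest of the test: once a second it compares the sorted bdev list reported by the host app against the expected value. Reconstructed from the traced calls (the exact bodies in discovery_remove_ifc.sh may differ slightly):

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev nvme0n1   # passes as soon as the discovery attach created the bdev

With nvme0n1 present, the test pulls the target address, downs the port, and waits for the bdev list to drain to ''.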
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:06.323 12:39:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.259 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.517 12:39:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.517 12:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:07.517 12:39:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:08.453 12:39:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.389 12:39:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.389 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.647 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:09.647 12:39:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.583 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:10.584 12:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.523 [2024-12-13 12:39:39.199835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:11.523 [2024-12-13 12:39:39.199869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.523 [2024-12-13 12:39:39.199895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.523 [2024-12-13 12:39:39.199905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.523 [2024-12-13 12:39:39.199912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.523 
[2024-12-13 12:39:39.199919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.523 [2024-12-13 12:39:39.199926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.523 [2024-12-13 12:39:39.199934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.523 [2024-12-13 12:39:39.199941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.523 [2024-12-13 12:39:39.199949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.523 [2024-12-13 12:39:39.199965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.523 [2024-12-13 12:39:39.199972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3290 is same with the state(6) to be set 00:34:11.523 [2024-12-13 12:39:39.209858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3290 (9): Bad file descriptor 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:11.523 12:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.523 [2024-12-13 12:39:39.219892] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:11.523 [2024-12-13 12:39:39.219905] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:11.523 [2024-12-13 12:39:39.219911] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:11.523 [2024-12-13 12:39:39.219916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:11.523 [2024-12-13 12:39:39.219936] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
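[Editor's note] This is the first visible consequence of the fault injected at 12:39:33: with the target address gone and the port down, the host's socket read times out (errno 110, ETIMEDOUT), pending admin commands are aborted (the SQ DELETION notices), the qpair is torn down, and bdev_nvme starts its reconnect cycle. The injection itself, from the trace, plus one real RPC that can be used to watch the controller state from outside:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # observe the failing ctrlr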
00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.901 [2024-12-13 12:39:40.278813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:12.901 [2024-12-13 12:39:40.278869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf3290 with addr=10.0.0.2, port=4420 00:34:12.901 [2024-12-13 12:39:40.278888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3290 is same with the state(6) to be set 00:34:12.901 [2024-12-13 12:39:40.278921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3290 (9): Bad file descriptor 00:34:12.901 [2024-12-13 12:39:40.279356] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:12.901 [2024-12-13 12:39:40.279385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:12.901 [2024-12-13 12:39:40.279395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:12.901 [2024-12-13 12:39:40.279407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:12.901 [2024-12-13 12:39:40.279417] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:12.901 [2024-12-13 12:39:40.279423] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:12.901 [2024-12-13 12:39:40.279429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:12.901 [2024-12-13 12:39:40.279440] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:12.901 [2024-12-13 12:39:40.279446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:12.901 12:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:13.837 [2024-12-13 12:39:41.281917] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:13.837 [2024-12-13 12:39:41.281942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:13.838 [2024-12-13 12:39:41.281955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:13.838 [2024-12-13 12:39:41.281962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:13.838 [2024-12-13 12:39:41.281970] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:13.838 [2024-12-13 12:39:41.281976] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:13.838 [2024-12-13 12:39:41.281981] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:13.838 [2024-12-13 12:39:41.281985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:13.838 [2024-12-13 12:39:41.282007] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:13.838 [2024-12-13 12:39:41.282032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.838 [2024-12-13 12:39:41.282041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.838 [2024-12-13 12:39:41.282052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.838 [2024-12-13 12:39:41.282059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.838 [2024-12-13 12:39:41.282066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.838 [2024-12-13 12:39:41.282073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.838 [2024-12-13 12:39:41.282079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.838 [2024-12-13 12:39:41.282086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.838 [2024-12-13 12:39:41.282093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.838 [2024-12-13 12:39:41.282099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.838 [2024-12-13 12:39:41.282105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
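[Editor's note] Each reconnect attempt fails the same way (connect() returns errno 110), and once --ctrlr-loss-timeout-sec 2 expires bdev_nvme gives up: the controller stays in failed state, the discovery entry for nqn.2016-06.io.spdk:cnode0 is removed, and the bdev is deleted. In terms of the options given at discovery time:

  # --reconnect-delay-sec 1      retry the connection every second
  # --fast-io-fail-timeout-sec 1 fail queued I/O early instead of stalling it
  # --ctrlr-loss-timeout-sec 2   delete the controller ~2s after the first loss
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # now prints nothing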
00:34:13.838 [2024-12-13 12:39:41.282131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce29e0 (9): Bad file descriptor 00:34:13.838 [2024-12-13 12:39:41.283128] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:13.838 [2024-12-13 12:39:41.283138] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:13.838 12:39:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.215 12:39:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:15.215 12:39:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.782 [2024-12-13 12:39:43.340930] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:15.782 [2024-12-13 12:39:43.340949] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:15.782 [2024-12-13 12:39:43.340961] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:15.782 [2024-12-13 12:39:43.427210] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:16.041 [2024-12-13 12:39:43.488691] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:16.042 [2024-12-13 12:39:43.489217] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xcf5540:1 started. 00:34:16.042 [2024-12-13 12:39:43.490237] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:16.042 [2024-12-13 12:39:43.490266] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:16.042 [2024-12-13 12:39:43.490282] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:16.042 [2024-12-13 12:39:43.490293] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:16.042 [2024-12-13 12:39:43.490299] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:16.042 [2024-12-13 12:39:43.498451] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xcf5540 was disconnected and freed. delete nvme_qpair. 
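The loop traced above is the harness's wait_for_bdev/get_bdev_list pattern: poll bdev_get_bdevs over the host app's RPC socket, normalize the names with jq/sort/xargs, and sleep 1 s until nvme1n1 reappears after the interface is re-added and the discovery poller re-attaches the subsystem. A standalone sketch of the same pattern, assuming SPDK's scripts/rpc.py and a host app listening on /tmp/host.sock:

    #!/usr/bin/env bash
    # Sketch of the wait-for-bdev polling pattern used by the test above.
    # Assumes an SPDK app serving JSON-RPC on /tmp/host.sock and jq installed.
    wait_for_bdev() {
        local want=$1 timeout=${2:-30} names
        while (( timeout-- > 0 )); do
            names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                        | jq -r '.[].name' | sort | xargs)
            [[ " $names " == *" $want "* ]] && return 0   # bdev is present
            sleep 1
        done
        return 1                                          # timed out
    }
    wait_for_bdev nvme1n1 || echo "nvme1n1 never appeared" >&2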
00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 501879 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501879 ']' 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501879 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501879 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501879' 00:34:16.042 killing process with pid 501879 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501879 00:34:16.042 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501879 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.301 rmmod nvme_tcp 00:34:16.301 rmmod nvme_fabrics 00:34:16.301 rmmod nvme_keyring 00:34:16.301 12:39:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 501708 ']' 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 501708 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501708 ']' 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501708 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501708 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501708' 00:34:16.301 killing process with pid 501708 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501708 00:34:16.301 12:39:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501708 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.560 12:39:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.466 00:34:18.466 real 0m20.282s 00:34:18.466 user 0m24.535s 00:34:18.466 sys 0m5.724s 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.466 ************************************ 00:34:18.466 END TEST nvmf_discovery_remove_ifc 00:34:18.466 ************************************ 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.466 12:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.726 ************************************ 00:34:18.726 START TEST nvmf_identify_kernel_target 00:34:18.726 ************************************ 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.726 * Looking for test storage... 00:34:18.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:18.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.726 --rc genhtml_branch_coverage=1 00:34:18.726 --rc genhtml_function_coverage=1 00:34:18.726 --rc genhtml_legend=1 00:34:18.726 --rc geninfo_all_blocks=1 00:34:18.726 --rc geninfo_unexecuted_blocks=1 00:34:18.726 00:34:18.726 ' 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:18.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.726 --rc genhtml_branch_coverage=1 00:34:18.726 --rc genhtml_function_coverage=1 00:34:18.726 --rc genhtml_legend=1 00:34:18.726 --rc geninfo_all_blocks=1 00:34:18.726 --rc geninfo_unexecuted_blocks=1 00:34:18.726 00:34:18.726 ' 00:34:18.726 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:18.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.726 --rc genhtml_branch_coverage=1 00:34:18.726 --rc genhtml_function_coverage=1 00:34:18.727 --rc genhtml_legend=1 00:34:18.727 --rc geninfo_all_blocks=1 00:34:18.727 --rc geninfo_unexecuted_blocks=1 00:34:18.727 00:34:18.727 ' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.727 --rc genhtml_branch_coverage=1 00:34:18.727 --rc genhtml_function_coverage=1 00:34:18.727 --rc genhtml_legend=1 00:34:18.727 --rc geninfo_all_blocks=1 00:34:18.727 --rc geninfo_unexecuted_blocks=1 00:34:18.727 00:34:18.727 ' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:18.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.727 12:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.301 12:39:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:25.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:25.301 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:25.301 Found net devices under 0000:af:00.0: cvl_0_0 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.301 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:25.302 Found net devices under 0000:af:00.1: cvl_0_1 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.302 12:39:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:25.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:34:25.302 00:34:25.302 --- 10.0.0.2 ping statistics --- 00:34:25.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.302 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:25.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:34:25.302 00:34:25.302 --- 10.0.0.1 ping statistics --- 00:34:25.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.302 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.302 12:39:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:25.302 12:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.210 Waiting for block devices as requested 00:34:27.469 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:27.469 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:27.469 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:27.728 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:27.728 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:27.728 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:27.986 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:27.986 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:27.986 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:27.986 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:28.245 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:28.245 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:28.245 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:28.504 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:28.504 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:28.504 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:28.504 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
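Once the drive is rebound from vfio-pci and visible as a block device, configure_kernel_target builds a Linux kernel NVMe-oF target through configfs; the mkdir/echo/ln -s sequence logged just below condenses to roughly the following sketch. Attribute names follow the stock nvmet configfs layout; mapping the two bare "echo" values to attr_model and attr_allow_any_host is an assumption inferred from the identify output further down (Model Number: SPDK-nqn.2016-06.io.spdk:testnqn).

    #!/usr/bin/env bash
    # Condensed sketch of a kernel NVMe/TCP target setup via configfs.
    # Assumes nvmet (and nvme-tcp) are loaded and /dev/nvme0n1 is unused.
    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"            # assumption: the model string echoed below
    echo 1            > "$sub/attr_allow_any_host"   # assumption: open access for the test
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                 # expose the subsystem on the port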
00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:28.763 No valid GPT data, bailing 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:28.763 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:28.764 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:29.024 00:34:29.024 Discovery Log Number of Records 2, Generation counter 2 00:34:29.024 =====Discovery Log Entry 0====== 00:34:29.024 trtype: tcp 00:34:29.024 adrfam: ipv4 00:34:29.024 subtype: current discovery subsystem 00:34:29.024 treq: not specified, sq flow control disable supported 00:34:29.024 portid: 1 00:34:29.024 trsvcid: 4420 00:34:29.024 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:29.024 traddr: 10.0.0.1 00:34:29.024 eflags: none 00:34:29.024 sectype: none 00:34:29.024 =====Discovery Log Entry 1====== 00:34:29.024 trtype: tcp 00:34:29.024 adrfam: ipv4 00:34:29.024 subtype: nvme subsystem 00:34:29.024 treq: not specified, sq flow control disable 
supported 00:34:29.024 portid: 1 00:34:29.024 trsvcid: 4420 00:34:29.024 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:29.024 traddr: 10.0.0.1 00:34:29.024 eflags: none 00:34:29.024 sectype: none 00:34:29.024 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:29.024 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:29.024 ===================================================== 00:34:29.024 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:29.024 ===================================================== 00:34:29.024 Controller Capabilities/Features 00:34:29.024 ================================ 00:34:29.024 Vendor ID: 0000 00:34:29.024 Subsystem Vendor ID: 0000 00:34:29.024 Serial Number: 57d2d5d696cf8ab487c7 00:34:29.024 Model Number: Linux 00:34:29.024 Firmware Version: 6.8.9-20 00:34:29.024 Recommended Arb Burst: 0 00:34:29.024 IEEE OUI Identifier: 00 00 00 00:34:29.024 Multi-path I/O 00:34:29.024 May have multiple subsystem ports: No 00:34:29.024 May have multiple controllers: No 00:34:29.024 Associated with SR-IOV VF: No 00:34:29.024 Max Data Transfer Size: Unlimited 00:34:29.024 Max Number of Namespaces: 0 00:34:29.024 Max Number of I/O Queues: 1024 00:34:29.024 NVMe Specification Version (VS): 1.3 00:34:29.024 NVMe Specification Version (Identify): 1.3 00:34:29.024 Maximum Queue Entries: 1024 00:34:29.024 Contiguous Queues Required: No 00:34:29.024 Arbitration Mechanisms Supported 00:34:29.024 Weighted Round Robin: Not Supported 00:34:29.024 Vendor Specific: Not Supported 00:34:29.024 Reset Timeout: 7500 ms 00:34:29.024 Doorbell Stride: 4 bytes 00:34:29.024 NVM Subsystem Reset: Not Supported 00:34:29.024 Command Sets Supported 00:34:29.024 NVM Command Set: Supported 00:34:29.024 Boot Partition: Not Supported 00:34:29.024 Memory Page Size Minimum: 4096 bytes 00:34:29.024 Memory Page Size Maximum: 4096 bytes 00:34:29.024 Persistent Memory Region: Not Supported 00:34:29.024 Optional Asynchronous Events Supported 00:34:29.024 Namespace Attribute Notices: Not Supported 00:34:29.024 Firmware Activation Notices: Not Supported 00:34:29.024 ANA Change Notices: Not Supported 00:34:29.024 PLE Aggregate Log Change Notices: Not Supported 00:34:29.024 LBA Status Info Alert Notices: Not Supported 00:34:29.024 EGE Aggregate Log Change Notices: Not Supported 00:34:29.024 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.024 Zone Descriptor Change Notices: Not Supported 00:34:29.024 Discovery Log Change Notices: Supported 00:34:29.024 Controller Attributes 00:34:29.024 128-bit Host Identifier: Not Supported 00:34:29.024 Non-Operational Permissive Mode: Not Supported 00:34:29.024 NVM Sets: Not Supported 00:34:29.024 Read Recovery Levels: Not Supported 00:34:29.024 Endurance Groups: Not Supported 00:34:29.024 Predictable Latency Mode: Not Supported 00:34:29.024 Traffic Based Keep ALive: Not Supported 00:34:29.025 Namespace Granularity: Not Supported 00:34:29.025 SQ Associations: Not Supported 00:34:29.025 UUID List: Not Supported 00:34:29.025 Multi-Domain Subsystem: Not Supported 00:34:29.025 Fixed Capacity Management: Not Supported 00:34:29.025 Variable Capacity Management: Not Supported 00:34:29.025 Delete Endurance Group: Not Supported 00:34:29.025 Delete NVM Set: Not Supported 00:34:29.025 Extended LBA Formats Supported: Not Supported 00:34:29.025 Flexible Data Placement 
Supported: Not Supported 00:34:29.025 00:34:29.025 Controller Memory Buffer Support 00:34:29.025 ================================ 00:34:29.025 Supported: No 00:34:29.025 00:34:29.025 Persistent Memory Region Support 00:34:29.025 ================================ 00:34:29.025 Supported: No 00:34:29.025 00:34:29.025 Admin Command Set Attributes 00:34:29.025 ============================ 00:34:29.025 Security Send/Receive: Not Supported 00:34:29.025 Format NVM: Not Supported 00:34:29.025 Firmware Activate/Download: Not Supported 00:34:29.025 Namespace Management: Not Supported 00:34:29.025 Device Self-Test: Not Supported 00:34:29.025 Directives: Not Supported 00:34:29.025 NVMe-MI: Not Supported 00:34:29.025 Virtualization Management: Not Supported 00:34:29.025 Doorbell Buffer Config: Not Supported 00:34:29.025 Get LBA Status Capability: Not Supported 00:34:29.025 Command & Feature Lockdown Capability: Not Supported 00:34:29.025 Abort Command Limit: 1 00:34:29.025 Async Event Request Limit: 1 00:34:29.025 Number of Firmware Slots: N/A 00:34:29.025 Firmware Slot 1 Read-Only: N/A 00:34:29.025 Firmware Activation Without Reset: N/A 00:34:29.025 Multiple Update Detection Support: N/A 00:34:29.025 Firmware Update Granularity: No Information Provided 00:34:29.025 Per-Namespace SMART Log: No 00:34:29.025 Asymmetric Namespace Access Log Page: Not Supported 00:34:29.025 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:29.025 Command Effects Log Page: Not Supported 00:34:29.025 Get Log Page Extended Data: Supported 00:34:29.025 Telemetry Log Pages: Not Supported 00:34:29.025 Persistent Event Log Pages: Not Supported 00:34:29.025 Supported Log Pages Log Page: May Support 00:34:29.025 Commands Supported & Effects Log Page: Not Supported 00:34:29.025 Feature Identifiers & Effects Log Page:May Support 00:34:29.025 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.025 Data Area 4 for Telemetry Log: Not Supported 00:34:29.025 Error Log Page Entries Supported: 1 00:34:29.025 Keep Alive: Not Supported 00:34:29.025 00:34:29.025 NVM Command Set Attributes 00:34:29.025 ========================== 00:34:29.025 Submission Queue Entry Size 00:34:29.025 Max: 1 00:34:29.025 Min: 1 00:34:29.025 Completion Queue Entry Size 00:34:29.025 Max: 1 00:34:29.025 Min: 1 00:34:29.025 Number of Namespaces: 0 00:34:29.025 Compare Command: Not Supported 00:34:29.025 Write Uncorrectable Command: Not Supported 00:34:29.025 Dataset Management Command: Not Supported 00:34:29.025 Write Zeroes Command: Not Supported 00:34:29.025 Set Features Save Field: Not Supported 00:34:29.025 Reservations: Not Supported 00:34:29.025 Timestamp: Not Supported 00:34:29.025 Copy: Not Supported 00:34:29.025 Volatile Write Cache: Not Present 00:34:29.025 Atomic Write Unit (Normal): 1 00:34:29.025 Atomic Write Unit (PFail): 1 00:34:29.025 Atomic Compare & Write Unit: 1 00:34:29.025 Fused Compare & Write: Not Supported 00:34:29.025 Scatter-Gather List 00:34:29.025 SGL Command Set: Supported 00:34:29.025 SGL Keyed: Not Supported 00:34:29.025 SGL Bit Bucket Descriptor: Not Supported 00:34:29.025 SGL Metadata Pointer: Not Supported 00:34:29.025 Oversized SGL: Not Supported 00:34:29.025 SGL Metadata Address: Not Supported 00:34:29.025 SGL Offset: Supported 00:34:29.025 Transport SGL Data Block: Not Supported 00:34:29.025 Replay Protected Memory Block: Not Supported 00:34:29.025 00:34:29.025 Firmware Slot Information 00:34:29.025 ========================= 00:34:29.025 Active slot: 0 00:34:29.025 00:34:29.025 00:34:29.025 Error Log 00:34:29.025 
========= 00:34:29.025 00:34:29.025 Active Namespaces 00:34:29.025 ================= 00:34:29.025 Discovery Log Page 00:34:29.025 ================== 00:34:29.025 Generation Counter: 2 00:34:29.025 Number of Records: 2 00:34:29.025 Record Format: 0 00:34:29.025 00:34:29.025 Discovery Log Entry 0 00:34:29.025 ---------------------- 00:34:29.025 Transport Type: 3 (TCP) 00:34:29.025 Address Family: 1 (IPv4) 00:34:29.025 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:29.025 Entry Flags: 00:34:29.025 Duplicate Returned Information: 0 00:34:29.025 Explicit Persistent Connection Support for Discovery: 0 00:34:29.025 Transport Requirements: 00:34:29.025 Secure Channel: Not Specified 00:34:29.025 Port ID: 1 (0x0001) 00:34:29.025 Controller ID: 65535 (0xffff) 00:34:29.025 Admin Max SQ Size: 32 00:34:29.025 Transport Service Identifier: 4420 00:34:29.025 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:29.025 Transport Address: 10.0.0.1 00:34:29.025 Discovery Log Entry 1 00:34:29.025 ---------------------- 00:34:29.025 Transport Type: 3 (TCP) 00:34:29.025 Address Family: 1 (IPv4) 00:34:29.025 Subsystem Type: 2 (NVM Subsystem) 00:34:29.025 Entry Flags: 00:34:29.025 Duplicate Returned Information: 0 00:34:29.025 Explicit Persistent Connection Support for Discovery: 0 00:34:29.025 Transport Requirements: 00:34:29.025 Secure Channel: Not Specified 00:34:29.025 Port ID: 1 (0x0001) 00:34:29.025 Controller ID: 65535 (0xffff) 00:34:29.025 Admin Max SQ Size: 32 00:34:29.025 Transport Service Identifier: 4420 00:34:29.025 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:29.025 Transport Address: 10.0.0.1 00:34:29.025 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.025 get_feature(0x01) failed 00:34:29.025 get_feature(0x02) failed 00:34:29.025 get_feature(0x04) failed 00:34:29.025 ===================================================== 00:34:29.025 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.025 ===================================================== 00:34:29.025 Controller Capabilities/Features 00:34:29.025 ================================ 00:34:29.025 Vendor ID: 0000 00:34:29.025 Subsystem Vendor ID: 0000 00:34:29.025 Serial Number: f57a7495d2d7d5d8e62c 00:34:29.025 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.025 Firmware Version: 6.8.9-20 00:34:29.025 Recommended Arb Burst: 6 00:34:29.025 IEEE OUI Identifier: 00 00 00 00:34:29.025 Multi-path I/O 00:34:29.025 May have multiple subsystem ports: Yes 00:34:29.025 May have multiple controllers: Yes 00:34:29.025 Associated with SR-IOV VF: No 00:34:29.025 Max Data Transfer Size: Unlimited 00:34:29.025 Max Number of Namespaces: 1024 00:34:29.025 Max Number of I/O Queues: 128 00:34:29.025 NVMe Specification Version (VS): 1.3 00:34:29.025 NVMe Specification Version (Identify): 1.3 00:34:29.025 Maximum Queue Entries: 1024 00:34:29.025 Contiguous Queues Required: No 00:34:29.025 Arbitration Mechanisms Supported 00:34:29.025 Weighted Round Robin: Not Supported 00:34:29.025 Vendor Specific: Not Supported 00:34:29.025 Reset Timeout: 7500 ms 00:34:29.025 Doorbell Stride: 4 bytes 00:34:29.025 NVM Subsystem Reset: Not Supported 00:34:29.025 Command Sets Supported 00:34:29.025 NVM Command Set: Supported 00:34:29.025 Boot Partition: Not Supported 00:34:29.025 
Memory Page Size Minimum: 4096 bytes 00:34:29.025 Memory Page Size Maximum: 4096 bytes 00:34:29.025 Persistent Memory Region: Not Supported 00:34:29.025 Optional Asynchronous Events Supported 00:34:29.025 Namespace Attribute Notices: Supported 00:34:29.025 Firmware Activation Notices: Not Supported 00:34:29.025 ANA Change Notices: Supported 00:34:29.025 PLE Aggregate Log Change Notices: Not Supported 00:34:29.025 LBA Status Info Alert Notices: Not Supported 00:34:29.025 EGE Aggregate Log Change Notices: Not Supported 00:34:29.025 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.025 Zone Descriptor Change Notices: Not Supported 00:34:29.025 Discovery Log Change Notices: Not Supported 00:34:29.025 Controller Attributes 00:34:29.025 128-bit Host Identifier: Supported 00:34:29.025 Non-Operational Permissive Mode: Not Supported 00:34:29.025 NVM Sets: Not Supported 00:34:29.025 Read Recovery Levels: Not Supported 00:34:29.025 Endurance Groups: Not Supported 00:34:29.025 Predictable Latency Mode: Not Supported 00:34:29.025 Traffic Based Keep ALive: Supported 00:34:29.025 Namespace Granularity: Not Supported 00:34:29.025 SQ Associations: Not Supported 00:34:29.025 UUID List: Not Supported 00:34:29.025 Multi-Domain Subsystem: Not Supported 00:34:29.025 Fixed Capacity Management: Not Supported 00:34:29.025 Variable Capacity Management: Not Supported 00:34:29.025 Delete Endurance Group: Not Supported 00:34:29.025 Delete NVM Set: Not Supported 00:34:29.025 Extended LBA Formats Supported: Not Supported 00:34:29.025 Flexible Data Placement Supported: Not Supported 00:34:29.026 00:34:29.026 Controller Memory Buffer Support 00:34:29.026 ================================ 00:34:29.026 Supported: No 00:34:29.026 00:34:29.026 Persistent Memory Region Support 00:34:29.026 ================================ 00:34:29.026 Supported: No 00:34:29.026 00:34:29.026 Admin Command Set Attributes 00:34:29.026 ============================ 00:34:29.026 Security Send/Receive: Not Supported 00:34:29.026 Format NVM: Not Supported 00:34:29.026 Firmware Activate/Download: Not Supported 00:34:29.026 Namespace Management: Not Supported 00:34:29.026 Device Self-Test: Not Supported 00:34:29.026 Directives: Not Supported 00:34:29.026 NVMe-MI: Not Supported 00:34:29.026 Virtualization Management: Not Supported 00:34:29.026 Doorbell Buffer Config: Not Supported 00:34:29.026 Get LBA Status Capability: Not Supported 00:34:29.026 Command & Feature Lockdown Capability: Not Supported 00:34:29.026 Abort Command Limit: 4 00:34:29.026 Async Event Request Limit: 4 00:34:29.026 Number of Firmware Slots: N/A 00:34:29.026 Firmware Slot 1 Read-Only: N/A 00:34:29.026 Firmware Activation Without Reset: N/A 00:34:29.026 Multiple Update Detection Support: N/A 00:34:29.026 Firmware Update Granularity: No Information Provided 00:34:29.026 Per-Namespace SMART Log: Yes 00:34:29.026 Asymmetric Namespace Access Log Page: Supported 00:34:29.026 ANA Transition Time : 10 sec 00:34:29.026 00:34:29.026 Asymmetric Namespace Access Capabilities 00:34:29.026 ANA Optimized State : Supported 00:34:29.026 ANA Non-Optimized State : Supported 00:34:29.026 ANA Inaccessible State : Supported 00:34:29.026 ANA Persistent Loss State : Supported 00:34:29.026 ANA Change State : Supported 00:34:29.026 ANAGRPID is not changed : No 00:34:29.026 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:29.026 00:34:29.026 ANA Group Identifier Maximum : 128 00:34:29.026 Number of ANA Group Identifiers : 128 00:34:29.026 Max Number of Allowed Namespaces : 1024 00:34:29.026 
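[editor's note] The ANA block just printed shows the kernel target reporting all five ANA states and a 10-second transition time. On a connected host the data behind it is the ANA log page, log identifier 0Ch in the NVMe base specification; a minimal sketch for pulling it raw with nvme-cli (device node and transfer length are assumptions) is:

    # ANA log page = 0x0c; 4 KiB comfortably covers the single group reported here
    nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096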
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:29.026 Command Effects Log Page: Supported 00:34:29.026 Get Log Page Extended Data: Supported 00:34:29.026 Telemetry Log Pages: Not Supported 00:34:29.026 Persistent Event Log Pages: Not Supported 00:34:29.026 Supported Log Pages Log Page: May Support 00:34:29.026 Commands Supported & Effects Log Page: Not Supported 00:34:29.026 Feature Identifiers & Effects Log Page:May Support 00:34:29.026 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.026 Data Area 4 for Telemetry Log: Not Supported 00:34:29.026 Error Log Page Entries Supported: 128 00:34:29.026 Keep Alive: Supported 00:34:29.026 Keep Alive Granularity: 1000 ms 00:34:29.026 00:34:29.026 NVM Command Set Attributes 00:34:29.026 ========================== 00:34:29.026 Submission Queue Entry Size 00:34:29.026 Max: 64 00:34:29.026 Min: 64 00:34:29.026 Completion Queue Entry Size 00:34:29.026 Max: 16 00:34:29.026 Min: 16 00:34:29.026 Number of Namespaces: 1024 00:34:29.026 Compare Command: Not Supported 00:34:29.026 Write Uncorrectable Command: Not Supported 00:34:29.026 Dataset Management Command: Supported 00:34:29.026 Write Zeroes Command: Supported 00:34:29.026 Set Features Save Field: Not Supported 00:34:29.026 Reservations: Not Supported 00:34:29.026 Timestamp: Not Supported 00:34:29.026 Copy: Not Supported 00:34:29.026 Volatile Write Cache: Present 00:34:29.026 Atomic Write Unit (Normal): 1 00:34:29.026 Atomic Write Unit (PFail): 1 00:34:29.026 Atomic Compare & Write Unit: 1 00:34:29.026 Fused Compare & Write: Not Supported 00:34:29.026 Scatter-Gather List 00:34:29.026 SGL Command Set: Supported 00:34:29.026 SGL Keyed: Not Supported 00:34:29.026 SGL Bit Bucket Descriptor: Not Supported 00:34:29.026 SGL Metadata Pointer: Not Supported 00:34:29.026 Oversized SGL: Not Supported 00:34:29.026 SGL Metadata Address: Not Supported 00:34:29.026 SGL Offset: Supported 00:34:29.026 Transport SGL Data Block: Not Supported 00:34:29.026 Replay Protected Memory Block: Not Supported 00:34:29.026 00:34:29.026 Firmware Slot Information 00:34:29.026 ========================= 00:34:29.026 Active slot: 0 00:34:29.026 00:34:29.026 Asymmetric Namespace Access 00:34:29.026 =========================== 00:34:29.026 Change Count : 0 00:34:29.026 Number of ANA Group Descriptors : 1 00:34:29.026 ANA Group Descriptor : 0 00:34:29.026 ANA Group ID : 1 00:34:29.026 Number of NSID Values : 1 00:34:29.026 Change Count : 0 00:34:29.026 ANA State : 1 00:34:29.026 Namespace Identifier : 1 00:34:29.026 00:34:29.026 Commands Supported and Effects 00:34:29.026 ============================== 00:34:29.026 Admin Commands 00:34:29.026 -------------- 00:34:29.026 Get Log Page (02h): Supported 00:34:29.026 Identify (06h): Supported 00:34:29.026 Abort (08h): Supported 00:34:29.026 Set Features (09h): Supported 00:34:29.026 Get Features (0Ah): Supported 00:34:29.026 Asynchronous Event Request (0Ch): Supported 00:34:29.026 Keep Alive (18h): Supported 00:34:29.026 I/O Commands 00:34:29.026 ------------ 00:34:29.026 Flush (00h): Supported 00:34:29.026 Write (01h): Supported LBA-Change 00:34:29.026 Read (02h): Supported 00:34:29.026 Write Zeroes (08h): Supported LBA-Change 00:34:29.026 Dataset Management (09h): Supported 00:34:29.026 00:34:29.026 Error Log 00:34:29.026 ========= 00:34:29.026 Entry: 0 00:34:29.026 Error Count: 0x3 00:34:29.026 Submission Queue Id: 0x0 00:34:29.026 Command Id: 0x5 00:34:29.026 Phase Bit: 0 00:34:29.026 Status Code: 0x2 00:34:29.026 Status Code Type: 0x0 00:34:29.026 Do Not Retry: 1 00:34:29.026 
Error Location: 0x28 00:34:29.026 LBA: 0x0 00:34:29.026 Namespace: 0x0 00:34:29.026 Vendor Log Page: 0x0 00:34:29.026 ----------- 00:34:29.026 Entry: 1 00:34:29.026 Error Count: 0x2 00:34:29.026 Submission Queue Id: 0x0 00:34:29.026 Command Id: 0x5 00:34:29.026 Phase Bit: 0 00:34:29.026 Status Code: 0x2 00:34:29.026 Status Code Type: 0x0 00:34:29.026 Do Not Retry: 1 00:34:29.026 Error Location: 0x28 00:34:29.026 LBA: 0x0 00:34:29.026 Namespace: 0x0 00:34:29.026 Vendor Log Page: 0x0 00:34:29.026 ----------- 00:34:29.026 Entry: 2 00:34:29.026 Error Count: 0x1 00:34:29.026 Submission Queue Id: 0x0 00:34:29.026 Command Id: 0x4 00:34:29.026 Phase Bit: 0 00:34:29.026 Status Code: 0x2 00:34:29.026 Status Code Type: 0x0 00:34:29.026 Do Not Retry: 1 00:34:29.026 Error Location: 0x28 00:34:29.026 LBA: 0x0 00:34:29.026 Namespace: 0x0 00:34:29.026 Vendor Log Page: 0x0 00:34:29.026 00:34:29.026 Number of Queues 00:34:29.026 ================ 00:34:29.026 Number of I/O Submission Queues: 128 00:34:29.026 Number of I/O Completion Queues: 128 00:34:29.026 00:34:29.026 ZNS Specific Controller Data 00:34:29.026 ============================ 00:34:29.026 Zone Append Size Limit: 0 00:34:29.026 00:34:29.026 00:34:29.026 Active Namespaces 00:34:29.026 ================= 00:34:29.026 get_feature(0x05) failed 00:34:29.026 Namespace ID:1 00:34:29.026 Command Set Identifier: NVM (00h) 00:34:29.026 Deallocate: Supported 00:34:29.026 Deallocated/Unwritten Error: Not Supported 00:34:29.026 Deallocated Read Value: Unknown 00:34:29.026 Deallocate in Write Zeroes: Not Supported 00:34:29.026 Deallocated Guard Field: 0xFFFF 00:34:29.026 Flush: Supported 00:34:29.026 Reservation: Not Supported 00:34:29.026 Namespace Sharing Capabilities: Multiple Controllers 00:34:29.026 Size (in LBAs): 1953525168 (931GiB) 00:34:29.026 Capacity (in LBAs): 1953525168 (931GiB) 00:34:29.026 Utilization (in LBAs): 1953525168 (931GiB) 00:34:29.026 UUID: a62e63c2-06cd-4275-aed0-8579ed09bdc8 00:34:29.026 Thin Provisioning: Not Supported 00:34:29.026 Per-NS Atomic Units: Yes 00:34:29.026 Atomic Boundary Size (Normal): 0 00:34:29.026 Atomic Boundary Size (PFail): 0 00:34:29.026 Atomic Boundary Offset: 0 00:34:29.026 NGUID/EUI64 Never Reused: No 00:34:29.026 ANA group ID: 1 00:34:29.026 Namespace Write Protected: No 00:34:29.026 Number of LBA Formats: 1 00:34:29.026 Current LBA Format: LBA Format #00 00:34:29.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:29.026 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.026 rmmod nvme_tcp 00:34:29.026 rmmod nvme_fabrics 00:34:29.026 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:29.027 12:39:56 
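[editor's note] Two quick cross-checks on the dump that just ended: the error log holds three entries (error counts 0x3 down to 0x1, all from the failed get_feature probes), and the namespace advertises 1953525168 LBAs at 512 bytes each, which is exactly where the "931GiB" figure comes from. A sketch (device node assumed):

    # Pull the three error-log entries the controller reported
    nvme error-log /dev/nvme0 --log-entries=3

    # 1953525168 LBAs x 512 B / 2^30 = 931 GiB, matching the identify output
    echo $(( 1953525168 * 512 / 1024 / 1024 / 1024 ))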
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.027 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.286 12:39:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:31.193 12:39:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:34.484 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.2 
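[editor's note] clean_kernel_target above unwinds the configfs tree in strict reverse order of setup: disable the namespace, unlink the port from the subsystem, then remove namespace, port, and subsystem nodes, and only then unload the modules. Expanded into plain commands (the enable-attribute path is inferred; the trace only shows "echo 0"):

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet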
(8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:34.484 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.053 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:35.053 00:34:35.053 real 0m16.521s 00:34:35.053 user 0m4.412s 00:34:35.053 sys 0m8.500s 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.053 ************************************ 00:34:35.053 END TEST nvmf_identify_kernel_target 00:34:35.053 ************************************ 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.053 12:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.313 ************************************ 00:34:35.313 START TEST nvmf_auth_host 00:34:35.313 ************************************ 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:35.313 * Looking for test storage... 
00:34:35.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:35.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.313 --rc genhtml_branch_coverage=1 00:34:35.313 --rc genhtml_function_coverage=1 00:34:35.313 --rc genhtml_legend=1 00:34:35.313 --rc geninfo_all_blocks=1 00:34:35.313 --rc geninfo_unexecuted_blocks=1 00:34:35.313 00:34:35.313 ' 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:35.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.313 --rc genhtml_branch_coverage=1 00:34:35.313 --rc genhtml_function_coverage=1 00:34:35.313 --rc genhtml_legend=1 00:34:35.313 --rc geninfo_all_blocks=1 00:34:35.313 --rc geninfo_unexecuted_blocks=1 00:34:35.313 00:34:35.313 ' 00:34:35.313 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:35.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.313 --rc genhtml_branch_coverage=1 00:34:35.313 --rc genhtml_function_coverage=1 00:34:35.313 --rc genhtml_legend=1 00:34:35.313 --rc geninfo_all_blocks=1 00:34:35.313 --rc geninfo_unexecuted_blocks=1 00:34:35.314 00:34:35.314 ' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:35.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.314 --rc genhtml_branch_coverage=1 00:34:35.314 --rc genhtml_function_coverage=1 00:34:35.314 --rc genhtml_legend=1 00:34:35.314 --rc geninfo_all_blocks=1 00:34:35.314 --rc geninfo_unexecuted_blocks=1 00:34:35.314 00:34:35.314 ' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.314 12:40:02 
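[editor's note] The version gate that just ran decides whether the installed lcov is older than 2.x and, if so, keeps the branch/function-coverage flags in LCOV_OPTS. The comparison is a plain field-by-field numeric walk after splitting on '.', '-' and ':'; a self-contained sketch of that logic (the function name is mine, not the script's):

    version_lt() {
        # split both versions on . - : and compare numeric fields left to right
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep --rc lcov_branch_coverage=1"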
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:35.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
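[editor's note] Note the genuine failure captured in this stretch: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash rejects the empty string as a non-integer. The run continues because the test simply returns false, but the robust pattern is to default the variable before the numeric comparison (VAR below is a placeholder, not the script's actual variable name):

    # [ "$VAR" -eq 1 ] errors out when VAR is empty or unset;
    # expanding with a default makes the test well-formed either way
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi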
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.314 12:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.887 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.888 12:40:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:41.888 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:41.888 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.888 
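[editor's note] Device discovery above is driven entirely by sysfs: for each supported PCI NIC (here the two Intel E810 ports, device ID 0x159b), the script globs the net/ directory under the PCI device node to learn its interface names. Reduced to its essence (BDFs copied from the log):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # every netdev bound to this PCI function shows up here
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done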
12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:41.888 Found net devices under 0000:af:00.0: cvl_0_0 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:41.888 Found net devices under 0000:af:00.1: cvl_0_1 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.888 12:40:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:34:41.888 00:34:41.888 --- 10.0.0.2 ping statistics --- 00:34:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.888 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:34:41.888 00:34:41.888 --- 10.0.0.1 ping statistics --- 00:34:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.888 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:41.888 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=513478 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 513478 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513478 ']' 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
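[editor's note] The fixture that just pinged cleanly in both directions is built from one dual-port NIC: cvl_0_0 moves into the namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so host and target traverse a real link rather than loopback. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT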
00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.889 12:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=57f3655ea38803dc3e40fd94ef54f599 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8Th 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 57f3655ea38803dc3e40fd94ef54f599 0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 57f3655ea38803dc3e40fd94ef54f599 0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=57f3655ea38803dc3e40fd94ef54f599 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8Th 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8Th 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8Th 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.889 12:40:09 
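[editor's note] nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers. A condensed sketch of that startup handshake (the polling loop is an approximation of waitforlisten, not its literal body; binary path and socket path are taken from the log):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # wait for the RPC socket before issuing any commands against the target
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done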
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81e9e9218905c6d1dc0fbc97fda1f961e87cb7dad351e78ab2c141f65027dd96 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.peL 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81e9e9218905c6d1dc0fbc97fda1f961e87cb7dad351e78ab2c141f65027dd96 3 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81e9e9218905c6d1dc0fbc97fda1f961e87cb7dad351e78ab2c141f65027dd96 3 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81e9e9218905c6d1dc0fbc97fda1f961e87cb7dad351e78ab2c141f65027dd96 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.peL 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.peL 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.peL 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=545bf863eb6e770edf04b5eb0b62ca5ae8015af4b3524b62 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TES 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 545bf863eb6e770edf04b5eb0b62ca5ae8015af4b3524b62 0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 545bf863eb6e770edf04b5eb0b62ca5ae8015af4b3524b62 0 
00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=545bf863eb6e770edf04b5eb0b62ca5ae8015af4b3524b62 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TES 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TES 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TES 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=873fe26bed6998a54fb50e75eda79e767f50bb6fbd38bf24 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tZW 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 873fe26bed6998a54fb50e75eda79e767f50bb6fbd38bf24 2 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 873fe26bed6998a54fb50e75eda79e767f50bb6fbd38bf24 2 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=873fe26bed6998a54fb50e75eda79e767f50bb6fbd38bf24 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tZW 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tZW 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tZW 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.889 12:40:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4636ef121f14276ff0bd3d9e48fc2a4 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mwP 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4636ef121f14276ff0bd3d9e48fc2a4 1 00:34:41.889 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4636ef121f14276ff0bd3d9e48fc2a4 1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4636ef121f14276ff0bd3d9e48fc2a4 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mwP 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mwP 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mwP 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e45dfbf003246ee55f1c7e53259936e 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MLN 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e45dfbf003246ee55f1c7e53259936e 1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e45dfbf003246ee55f1c7e53259936e 1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=9e45dfbf003246ee55f1c7e53259936e 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MLN 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MLN 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MLN 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1217647e1183eecda8da7dfd7624ff0e83564a809063bffa 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fHY 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1217647e1183eecda8da7dfd7624ff0e83564a809063bffa 2 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1217647e1183eecda8da7dfd7624ff0e83564a809063bffa 2 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1217647e1183eecda8da7dfd7624ff0e83564a809063bffa 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fHY 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fHY 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fHY 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.890 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:42.149 12:40:09 
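The python step that follows each format_dhchap_key call above turns the raw hex into the final secret. Comparing its inputs with the DHHC-1 strings that surface later in this log (545bf863... becomes DHHC-1:00:NTQ1YmY4...32L8Fg==:), the encoding is base64 over the ASCII hex string with its little-endian CRC-32 appended, i.e. the standard DH-HMAC-CHAP secret representation. A sketch under that reading:

#!/usr/bin/env bash
# Sketch of the DHHC-1 encoding as inferred from this log's inputs and outputs:
# <prefix>:<2-digit digest id>:<base64(secret || crc32_le(secret))>:
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
data = key.encode()                           # the ASCII hex string is the secret
crc = zlib.crc32(data).to_bytes(4, "little")  # 4-byte CRC-32, little-endian
print("%s:%02x:%s:" % (prefix, digest, base64.b64encode(data + crc).decode()))
PY
}
format_key DHHC-1 545bf863eb6e770edf04b5eb0b62ca5ae8015af4b3524b62 0
# -> DHHC-1:00:NTQ1YmY4...32L8Fg==:  (the key1 secret echoed at host/auth.sh@45)
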
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fb023a426960e7faafd4561080122ab5 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i7y 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fb023a426960e7faafd4561080122ab5 0 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fb023a426960e7faafd4561080122ab5 0 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fb023a426960e7faafd4561080122ab5 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i7y 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i7y 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.i7y 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7405499685fe6ab422862f3af8b35d5241b7cb7dd102b7a8fdd8c087203fbede 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Omh 00:34:42.149 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7405499685fe6ab422862f3af8b35d5241b7cb7dd102b7a8fdd8c087203fbede 3 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7405499685fe6ab422862f3af8b35d5241b7cb7dd102b7a8fdd8c087203fbede 3 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7405499685fe6ab422862f3af8b35d5241b7cb7dd102b7a8fdd8c087203fbede 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Omh 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Omh 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Omh 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 513478 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513478 ']' 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.150 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8Th 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.peL ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.peL 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TES 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tZW ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.tZW 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mwP 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MLN ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MLN 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fHY 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.i7y ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.i7y 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Omh 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:42.409 12:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.409 12:40:10 
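The registration block just traced (host/auth.sh@80-82) takes the five key/ckey pairs on disk and registers each file as a named keyring entry over the RPC socket, so later attaches can refer to key0..key4 and ckey0..ckey3 by name. It reduces to the sketch below; the rpc.py path is an assumption, since rpc_cmd in the trace wraps it:

#!/usr/bin/env bash
# Sketch: register the generated secret files as named keyring entries,
# mirroring the keyring_file_add_key RPC calls above. File paths from this trace.
rpc=./scripts/rpc.py                          # assumed location of the wrapper
keys=(/tmp/spdk.key-null.8Th /tmp/spdk.key-null.TES)
ckeys=(/tmp/spdk.key-sha512.peL /tmp/spdk.key-sha384.tZW)
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    # controller keys are optional; key4 has none in this run
    [[ -n ${ckeys[i]:-} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
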
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:42.409 12:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:44.944 Waiting for block devices as requested 00:34:45.203 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:45.203 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.203 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.462 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.462 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.462 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.462 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.720 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.720 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.720 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.979 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.979 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.979 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.238 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.238 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.238 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:46.806 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:47.065 No valid GPT data, bailing 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:47.065 12:40:14 
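configure_kernel_target, running here, builds the kernel-side NVMe-oF target through configfs: load nvmet, create a subsystem with one namespace backed by /dev/nvme0n1, create TCP port 1 on 10.0.0.1:4420, and link the two. The trace only shows bare mkdir and echo commands, so which attribute file each write lands in is inferred from the standard nvmet configfs layout:

#!/usr/bin/env bash
# Sketch: kernel nvmet target matching the mkdir/echo sequence in this trace.
# The attribute file names are the standard nvmet configfs ones (inferred).
nqn=nqn.2024-02.io.spdk:cnode0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn" > "$sub/attr_model"              # model string (inferred target)
echo 1 > "$sub/attr_allow_any_host"               # later narrowed via allowed_hosts
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                  # expose the subsystem on the port
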
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:47.065 00:34:47.065 Discovery Log Number of Records 2, Generation counter 2 00:34:47.065 =====Discovery Log Entry 0====== 00:34:47.065 trtype: tcp 00:34:47.065 adrfam: ipv4 00:34:47.065 subtype: current discovery subsystem 00:34:47.065 treq: not specified, sq flow control disable supported 00:34:47.065 portid: 1 00:34:47.065 trsvcid: 4420 00:34:47.065 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:47.065 traddr: 10.0.0.1 00:34:47.065 eflags: none 00:34:47.065 sectype: none 00:34:47.065 =====Discovery Log Entry 1====== 00:34:47.065 trtype: tcp 00:34:47.065 adrfam: ipv4 00:34:47.065 subtype: nvme subsystem 00:34:47.065 treq: not specified, sq flow control disable supported 00:34:47.065 portid: 1 00:34:47.065 trsvcid: 4420 00:34:47.065 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:47.065 traddr: 10.0.0.1 00:34:47.065 eflags: none 00:34:47.065 sectype: none 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.065 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.324 nvme0n1 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
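host/auth.sh@88-93 first proves the path works with every digest and DH group offered at once; from @100 on, each (digest, dhgroup, keyid) combination gets its own round. One connect_authenticate round boils down to: push the parameters into the initiator, attach using the keyring names, check that a controller actually appeared, and detach. A condensed sketch with the addresses and NQNs from this trace (rpc.py path assumed):

#!/usr/bin/env bash
# Sketch of one connect_authenticate round as traced above.
rpc=./scripts/rpc.py
digest=$1 dhgroup=$2 keyid=$3

"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
ckey=()
[[ $keyid -ne 4 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")  # key4 has no ctrl key here
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" "${ckey[@]}"
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # as at auth.sh@64
"$rpc" bdev_nvme_detach_controller nvme0
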
00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.324 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.325 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.325 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.325 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.325 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.325 12:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.325 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.582 nvme0n1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.582 12:40:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.582 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.839 nvme0n1 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.840 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.098 nvme0n1 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.098 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.099 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.357 nvme0n1 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:48.357 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.358 12:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.358 nvme0n1 00:34:48.358 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.618 12:40:16 
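From here the same five key ids repeat for every remaining (digest, dhgroup) pair, which is why the log steps from sha256/ffdhe2048 keyid 4 straight into sha256/ffdhe3072 keyid 0. The driving loops at host/auth.sh@100-103 reduce to the sketch below; connect_round is a hypothetical stand-in for the per-combination steps sketched earlier, not a script in the repo:

#!/usr/bin/env bash
# Sketch: the digest x dhgroup x keyid sweep producing this stretch of the log.
# Both lists are taken from the printf calls at host/auth.sh@94.
connect_round() { echo "round: $*"; }   # hypothetical stand-in (see sketch above)
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3 4; do
            # nvmet_auth_set_key writes digest/dhgroup/secret into the target's
            # host entry, then one connect_authenticate round runs
            connect_round "$digest" "$dhgroup" "$keyid"
        done
    done
done
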
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.618 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.877 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.878 nvme0n1 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.878 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.137 
12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.137 nvme0n1 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.137 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.397 12:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.397 12:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.397 nvme0n1 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.397 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.398 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.398 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.398 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.657 12:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.657 nvme0n1 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.657 12:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.657 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.916 nvme0n1 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.916 12:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.484 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.743 nvme0n1 00:34:50.743 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.743 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.743 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:50.744 12:40:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.744 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.003 nvme0n1 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:51.003 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
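Each cycle in the trace above follows the same host-side pattern (host/auth.sh@55-65): restrict the SPDK host to a single digest/DH-group pair, attach a controller with the key under test, confirm via bdev_nvme_get_controllers that authentication actually produced a controller, then detach it. Below is a minimal sketch of that connect_authenticate helper, reconstructed from the traced commands; rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays are the harness's own helpers and state, and the function body is an inference from the trace rather than a verbatim copy of host/auth.sh.

connect_authenticate() {
    local digest dhgroup keyid ckey
    digest=$1 dhgroup=$2 keyid=$3
    # host/auth.sh@58: pass a controller key only when one exists for this keyid,
    # so both unidirectional (key4) and bidirectional (key0..key3) auth get covered
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # host/auth.sh@60: allow exactly one digest and one DH group on the host side
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # host/auth.sh@61: connect to the target (10.0.0.1:4420 here) and authenticate
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # host/auth.sh@64-65: authentication passed only if the controller shows up;
    # detach so the next digest/dhgroup/keyid combination starts from a clean state
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The bare nvme0n1 lines interleaved in the trace appear to be the attach RPC printing the bdev it created for the controller's namespace; each one vanishes again on detach.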
00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.004 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.263 nvme0n1 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.263 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.522 12:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.522 nvme0n1 00:34:51.522 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.522 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.522 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.522 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.522 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.781 12:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.781 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.782 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.039 nvme0n1 00:34:52.039 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.039 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.039 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.040 12:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.418 12:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.678 nvme0n1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 
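Before every connect, the matching parameters are first programmed into the kernel nvmet target (host/auth.sh@42-51), then the loop at host/auth.sh@101-104 advances to the next dhgroup/keyid pair. bash xtrace does not print redirections, so only the bare echo commands show up in the trace; the configfs destinations in the sketch below are an assumption based on the kernel's DH-HMAC-CHAP host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and are not themselves visible in the log.

nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest=$1 dhgroup=$2 keyid=$3
    key=${keys[keyid]} ckey=${ckeys[keyid]}

    # Assumed configfs entry for the one host NQN the target allows
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"     # host/auth.sh@48: e.g. 'hmac(sha256)'
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # host/auth.sh@49: e.g. ffdhe6144
    echo "$key"          > "$host/dhchap_key"      # host/auth.sh@50: DHHC-1 secret
    # host/auth.sh@51: keyid 4 carries no controller key, so bidirectional
    # authentication is exercised only for keyids 0 through 3
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

The DHHC-1:NN:...: strings themselves are NVMe in-band authentication secrets: the two-digit field after the prefix records how the base64 payload was transformed (00 for a cleartext secret, 01/02/03 for SHA-256/384/512), which is why the keys and controller keys cycled through this trace deliberately span all four variants.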
00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.678 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.248 nvme0n1 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.248 12:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.248 12:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.507 nvme0n1 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.507 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:54.766 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.767 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.026 nvme0n1 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.026 12:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.594 nvme0n1 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.594 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.163 nvme0n1 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.163 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.164 12:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.732 nvme0n1 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:56.732 
12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.732 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 nvme0n1 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.299 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:57.558 12:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.558 
12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.558 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.124 nvme0n1 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.124 12:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.692 nvme0n1 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.692 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.952 nvme0n1 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:58.952 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.953 nvme0n1 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.953 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:59.212 12:40:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.212 nvme0n1 00:34:59.212 12:40:26 
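The nvmet_auth_set_key calls interleaved above (host/auth.sh@42-@51) stage each credential on the kernel nvmet target before the host tries to connect with it; the echo 'hmac(sha384)', echo ffdhe2048, and echo DHHC-1:... lines are those writes, with their redirection targets hidden by xtrace. A minimal reconstruction, assuming the stock /sys/kernel/config/nvmet configfs layout (the exact paths never appear in this log):

    # Hypothetical body for nvmet_auth_set_key: only the echoes at
    # host/auth.sh@48-@51 are visible in the trace, so the configfs
    # destinations below are an assumption based on the standard nvmet
    # host-attribute layout. keys[]/ckeys[] are the arrays the @102 loop
    # iterates over.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"    # @48
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # @49
        echo "$key" > "$host/dhchap_key"                # @50
        # keyid 4 has no controller key, hence the "[[ -z '' ]]" at @51
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }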
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.212 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.470 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.470 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.470 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.471 nvme0n1 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.471 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.729 nvme0n1 00:34:59.729 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.729 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.729 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.729 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.730 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.988 nvme0n1 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.988 
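On the host side, each iteration reduces to the two RPCs traced verbatim at host/auth.sh@60-@61: bdev_nvme_set_options pins the single digest and DH group the initiator may negotiate, then bdev_nvme_attach_controller performs the authenticated fabrics connect. The bare nvme0n1 tokens after each attach are that RPC's output, naming the bdev created for namespace 1. Condensed for the ffdhe3072/key0 iteration above (key0/ckey0 are key names presumably registered with the keyring earlier in the run, outside this excerpt):

    # One host-side iteration, exactly as traced (digest sha384, group ffdhe3072).
    # rpc_cmd is the suite's wrapper around scripts/rpc.py.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe3072

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0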
12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.988 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.989 12:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.989 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.247 nvme0n1 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:00.247 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.248 12:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.506 nvme0n1 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.506 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.507 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.766 nvme0n1 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.766 
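get_main_ns_ip, traced at nvmf/common.sh@769-@783 above, picks the address for each attach by mapping the active transport to the name of an environment variable and then dereferencing it; with tcp that resolves NVMF_INITIATOR_IP to 10.0.0.1. A reconstruction from the expanded trace (the $TEST_TRANSPORT name is an assumption, since only its value, tcp, appears in the log):

    # get_main_ns_ip as it executes here: transport -> env-var name -> value.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                 # @775: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # @776
        ip=${!ip}               # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1                             # @778
        echo "$ip"                                           # @783
    }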
12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.766 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.025 nvme0n1 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.025 
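Every attach is followed by the same check-and-teardown before the next combination, traced at host/auth.sh@64-@65: list the controllers, assert the lone name is nvme0 (the \n\v\m\e\0 on the right of == is just that name with glob matching defeated), and detach. As a two-liner:

    # Per-iteration verification and cleanup (host/auth.sh@64-@65);
    # rpc_cmd is the suite's rpc.py wrapper.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0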
12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.025 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.026 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.285 nvme0n1 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.285 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.286 12:40:28 
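The secrets cycled through this loop follow the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:. As I read the nvme-cli convention, an inference since this log never decodes them, <t> identifies the transformation hash applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32 check value, so the 48-character payload on the keyid-0 secret decodes to a 32-byte secret. A quick inspection under that assumption:

    # Field layout of a DHHC-1 secret (format per the reading above; verify
    # against the NVMe spec before relying on it). Key copied verbatim from
    # the keyid-0 entry in this log.
    key='DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu:'
    IFS=: read -r tag transform payload _ <<< "$key"
    echo "transform id: $transform"                 # 00..03
    bytes=$(base64 -d <<< "$payload" | wc -c)
    echo "secret length: $((bytes - 4)) bytes"      # minus the CRC-32 tail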
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.286 12:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.545 nvme0n1 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.545 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.802 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.060 nvme0n1 00:35:02.060 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]]
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.061 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.319 nvme0n1
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=:
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=:
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.319 12:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.578 nvme0n1
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.578 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
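Each connect_authenticate iteration (host/auth.sh@55-65) then reconfigures the SPDK initiator side and dials the target with the matching key pair; the controller only shows up in bdev_nvme_get_controllers if the DH-HMAC-CHAP handshake succeeded. Restated as plain RPC calls, with rpc_cmd assumed to wrap scripts/rpc.py against the running target and every flag, address, and NQN taken verbatim from the trace:

    # Sketch of one iteration: digest=sha384, dhgroup=ffdhe4096, keyid=2.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    	--dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Verify the authenticated controller exists, then tear it down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The --dhchap-key/--dhchap-ctrlr-key arguments are key names previously registered with the target (key2, ckey2, ...), not the raw secrets.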
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu:
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=:
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu:
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]]
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=:
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.579 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.147 nvme0n1
00:35:03.147 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.147 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.147 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.147 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==:
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==:
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==:
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==:
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:03.148 12:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.716 nvme0n1
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
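The secrets themselves follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 secret>:, where the second field records the transform applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why keys tagged 00, 01, 02 and 03 all appear in this run. If test keys ever need regenerating, nvme-cli can produce them; a sketch, with the gen-dhchap-key flag names assumed from nvme-cli rather than taken from this log:

    # Sketch: generate a SHA-256-transformed 32-byte DH-HMAC-CHAP secret
    # bound to the host NQN this test uses.
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0
    # Prints a secret of the form DHHC-1:01:<base64>: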
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9:
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q:
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9:
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]]
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q:
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.716 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.717 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.976 nvme0n1
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.976 12:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.544 nvme0n1
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=:
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=:
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:04.544 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.545 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.804 nvme0n1
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.804 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
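get_main_ns_ip (nvmf/common.sh@769-783) picks the initiator-side address for the transport under test; the trace shows the tcp candidate resolving to 10.0.0.1. A condensed sketch of that selection logic, reconstructed from the traced statements (the transport variable name is an assumption; the trace only shows the literal value tcp):

    # Sketch: map the transport to the variable naming its address, then
    # dereference it and echo the value (10.0.0.1 in this run).
    get_main_ns_ip() {
    	local ip
    	local -A ip_candidates
    	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    	ip_candidates["tcp"]=NVMF_INITIATOR_IP

    	[[ -z $TEST_TRANSPORT ]] && return 1   # traced as [[ -z tcp ]]
    	ip=${ip_candidates[$TEST_TRANSPORT]}
    	[[ -z $ip ]] && return 1               # traced as [[ -z NVMF_INITIATOR_IP ]]
    	[[ -z ${!ip} ]] && return 1            # indirect expansion: [[ -z 10.0.0.1 ]]
    	echo "${!ip}"
    }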
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu:
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=:
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu:
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]]
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=:
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.063 12:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.633 nvme0n1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==:
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==:
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==:
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==:
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:05.633 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9:
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q:
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9:
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q:
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.202 12:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:06.769 nvme0n1
00:35:06.769 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:06.769 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:06.769 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:06.769 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:06.769 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==:
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d:
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.028 12:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.599 nvme0n1
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.599 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.600 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.169 nvme0n1 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.169 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.170 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:08.429 nvme0n1 00:35:08.429 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.429 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.430 12:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.430 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.690 nvme0n1 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:08.690 
12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.690 nvme0n1 00:35:08.690 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:08.949 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.950 
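The nvmf/common.sh lines that recur before every attach are get_main_ns_ip resolving which address the initiator should dial. A sketch of that helper as it reads from the trace; the transport variable's actual name is not visible in the log (only its expansion, tcp), so TEST_TRANSPORT below is an assumption:

    # Reconstructed from the xtrace: map the active transport to the env var
    # holding the main-namespace IP, then dereference it (10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                # assumed variable name
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                         # indirect expansion
        echo "${!ip}"
    }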
12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.950 nvme0n1 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.950 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.209 nvme0n1 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.209 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.468 12:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.468 nvme0n1 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.468 
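The DHHC-1 strings echoed throughout this trace are NVMe in-band authentication secrets in their standard textual form, DHHC-1:<hh>:<base64>:, where <hh> names the hash the secret is sized for (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. In this log the 01, 02 and 03 keys decode to 32-, 48- and 64-byte secrets respectively. A quick way to check, using a key taken from the trace (coreutils base64 assumed):

    # Decode one DHHC-1 secret from the trace and confirm its length: a :01:
    # (SHA-256-sized) key should decode to 32 secret bytes + a 4-byte CRC-32.
    key='DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9:'
    payload=$(cut -d: -f3 <<< "$key")       # middle field is the base64 payload
    base64 -d <<< "$payload" | wc -c        # prints 36 (32-byte secret + CRC-32)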
12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.468 12:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.468 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.727 nvme0n1 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:09.727 12:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.727 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.986 nvme0n1 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.986 12:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.986 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.245 nvme0n1 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.245 
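One detail of the trace worth decoding: the controller-name check prints as [[ nvme0 == \n\v\m\e\0 ]] because, inside [[ ]], the right-hand side of == is a glob pattern, and bash's xtrace backslash-escapes a quoted pattern character by character to show it will be matched literally. The underlying check is equivalent to this sketch (rpc_cmd/jq usage as in the trace):

    # Literal comparison of the attached controller's name; quoting the RHS
    # (what the xtrace backslashes denote) disables glob matching inside [[ ]].
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    if [[ "$name" == "nvme0" ]]; then
        rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next combination
    fi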
12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.245 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.246 12:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:10.505 nvme0n1 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.505 12:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.505 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.506 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.765 nvme0n1 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.765 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.025 12:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.025 12:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.025 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.284 nvme0n1 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.284 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.285 12:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 nvme0n1 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.544 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.545 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.804 nvme0n1 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.804 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.063 nvme0n1 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.063 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.322 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.323 12:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.323 12:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 nvme0n1 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:12.582 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.583 12:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.583 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.151 nvme0n1 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.151 12:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.410 nvme0n1 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.410 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.669 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.670 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.670 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.929 nvme0n1 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.929 12:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.929 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.497 nvme0n1 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.497 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTdmMzY1NWVhMzg4MDNkYzNlNDBmZDk0ZWY1NGY1OTnXNYKu: 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFlOWU5MjE4OTA1YzZkMWRjMGZiYzk3ZmRhMWY5NjFlODdjYjdkYWQzNTFlNzhhYjJjMTQxZjY1MDI3ZGQ5NmHpllk=: 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.497 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.066 nvme0n1 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:15.066 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.067 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.635 nvme0n1 00:35:15.635 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.635 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.635 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.635 12:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.635 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.635 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.894 12:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.894 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.895 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.895 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.895 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.895 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.463 nvme0n1 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTIxNzY0N2UxMTgzZWVjZGE4ZGE3ZGZkNzYyNGZmMGU4MzU2NGE4MDkwNjNiZmZhBrSHTw==: 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmIwMjNhNDI2OTYwZTdmYWFmZDQ1NjEwODAxMjJhYjUgMU7d: 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.463 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.463 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:16.463 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.463 
12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.032 nvme0n1 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzQwNTQ5OTY4NWZlNmFiNDIyODYyZjNhZjhiMzVkNTI0MWI3Y2I3ZGQxMDJiN2E4ZmRkOGMwODcyMDNmYmVkZdVJa0I=: 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.032 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 nvme0n1 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.600 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.601 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:17.601 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.601 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 request: 00:35:17.859 { 00:35:17.859 "name": "nvme0", 00:35:17.859 "trtype": "tcp", 00:35:17.859 "traddr": "10.0.0.1", 00:35:17.859 "adrfam": "ipv4", 00:35:17.859 "trsvcid": "4420", 00:35:17.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:17.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:17.859 "prchk_reftag": false, 00:35:17.859 "prchk_guard": false, 00:35:17.859 "hdgst": false, 00:35:17.859 "ddgst": false, 00:35:17.859 "allow_unrecognized_csi": false, 00:35:17.859 "method": "bdev_nvme_attach_controller", 00:35:17.859 "req_id": 1 00:35:17.859 } 00:35:17.859 Got JSON-RPC error response 00:35:17.859 response: 00:35:17.859 { 00:35:17.859 "code": -5, 00:35:17.859 "message": "Input/output error" 00:35:17.859 } 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
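The failed attach above is the intended negative path: the target side was provisioned for DH-HMAC-CHAP, so a connect attempt that omits --dhchap-key is rejected and surfaces to the host as JSON-RPC error -5 (Input/output error). The NOT/es bookkeeping visible in the trace asserts exactly that. A minimal sketch of the pattern, assuming a simplified helper rather than the real common/autotest_common.sh implementation (which, as the trace shows, additionally treats exit statuses above 128 as crashes), is:

NOT() {
    # Succeed only when the wrapped command fails, so an expected
    # authentication failure keeps the test run green.
    if "$@"; then
        return 1   # unexpected success
    fi
    return 0       # expected failure
}

# usage: connect without a DHCHAP key and require the attach to fail
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0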
00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.859 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 request: 00:35:17.859 { 00:35:17.859 "name": "nvme0", 00:35:17.859 "trtype": "tcp", 00:35:17.859 "traddr": "10.0.0.1", 00:35:17.859 "adrfam": "ipv4", 00:35:17.859 "trsvcid": "4420", 00:35:17.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:17.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:17.859 "prchk_reftag": false, 00:35:17.859 "prchk_guard": false, 00:35:17.859 "hdgst": false, 00:35:17.859 "ddgst": false, 00:35:17.859 "dhchap_key": "key2", 00:35:17.859 "allow_unrecognized_csi": false, 00:35:17.859 "method": "bdev_nvme_attach_controller", 00:35:17.859 "req_id": 1 00:35:17.859 } 00:35:17.859 Got JSON-RPC error response 00:35:17.860 response: 00:35:17.860 { 00:35:17.860 "code": -5, 00:35:17.860 "message": "Input/output error" 00:35:17.860 } 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
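Every DHHC-1:NN:...: string echoed in these traces is an NVMe-oF in-band authentication secret in the representation defined by the NVMe specification: NN names the hash the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the payload is the base64-encoded secret with a CRC-32 appended. Assuming a recent nvme-cli is available, a compatible secret can be generated like so (flags per nvme-cli's gen-dhchap-key documentation):

# 32-byte secret, SHA-256 transformed, bound to the host NQN used above
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0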
00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.860 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.119 request: 00:35:18.119 { 00:35:18.119 "name": "nvme0", 00:35:18.119 "trtype": "tcp", 00:35:18.119 "traddr": "10.0.0.1", 00:35:18.119 "adrfam": "ipv4", 00:35:18.119 "trsvcid": "4420", 00:35:18.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:18.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:18.119 "prchk_reftag": false, 00:35:18.119 "prchk_guard": false, 00:35:18.119 "hdgst": false, 00:35:18.119 "ddgst": false, 00:35:18.119 "dhchap_key": "key1", 00:35:18.119 "dhchap_ctrlr_key": "ckey2", 00:35:18.119 "allow_unrecognized_csi": false, 00:35:18.119 "method": "bdev_nvme_attach_controller", 00:35:18.119 "req_id": 1 00:35:18.119 } 00:35:18.119 Got JSON-RPC error response 00:35:18.119 response: 00:35:18.119 { 00:35:18.119 "code": -5, 00:35:18.119 "message": "Input/output 
error" 00:35:18.119 } 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.119 nvme0n1 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.119 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.378 request: 00:35:18.378 { 00:35:18.378 "name": "nvme0", 00:35:18.378 "dhchap_key": "key1", 00:35:18.378 "dhchap_ctrlr_key": "ckey2", 00:35:18.378 "method": "bdev_nvme_set_keys", 00:35:18.378 "req_id": 1 00:35:18.378 } 00:35:18.378 Got JSON-RPC error response 00:35:18.378 response: 00:35:18.378 { 00:35:18.378 "code": -13, 00:35:18.378 "message": "Permission denied" 00:35:18.378 } 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:18.378 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:19.315 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:20.693 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.693 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:20.693 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.693 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.693 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ1YmY4NjNlYjZlNzcwZWRmMDRiNWViMGI2MmNhNWFlODAxNWFmNGIzNTI0YjYy32L8Fg==: 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODczZmUyNmJlZDY5OThhNTRmYjUwZTc1ZWRhNzllNzY3ZjUwYmI2ZmJkMzhiZjI0FUn7lw==: 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.693 nvme0n1 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:20.693 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ2MzZlZjEyMWYxNDI3NmZmMGJkM2Q5ZTQ4ZmMyYTQBOPp9: 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: ]] 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWU0NWRmYmYwMDMyNDZlZTU1ZjFjN2U1MzI1OTkzNmUeWl/q: 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.694 request: 00:35:20.694 { 00:35:20.694 "name": "nvme0", 00:35:20.694 "dhchap_key": "key2", 00:35:20.694 "dhchap_ctrlr_key": "ckey1", 00:35:20.694 "method": "bdev_nvme_set_keys", 00:35:20.694 "req_id": 1 00:35:20.694 } 00:35:20.694 Got JSON-RPC error response 00:35:20.694 response: 00:35:20.694 { 00:35:20.694 "code": -13, 00:35:20.694 "message": "Permission denied" 00:35:20.694 } 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:20.694 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:21.630 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.630 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:21.630 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.630 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.630 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:21.890 12:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.890 rmmod nvme_tcp 00:35:21.890 rmmod nvme_fabrics 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 513478 ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 513478 ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513478' 00:35:21.890 killing process with pid 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 513478 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:21.890 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:22.150 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:24.057 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.350 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:27.350 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:27.922 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:27.922 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8Th /tmp/spdk.key-null.TES /tmp/spdk.key-sha256.mwP /tmp/spdk.key-sha384.fHY /tmp/spdk.key-sha512.Omh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:27.922 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:31.213 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:31.213 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:35:31.213 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:31.213 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:31.213 00:35:31.213 real 0m55.621s 00:35:31.213 user 0m50.422s 00:35:31.213 sys 0m12.632s 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.213 ************************************ 00:35:31.213 END TEST nvmf_auth_host 00:35:31.213 ************************************ 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.213 ************************************ 00:35:31.213 START TEST nvmf_digest 00:35:31.213 ************************************ 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:31.213 * Looking for test storage... 
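A few entries back, the clean_kernel_target sequence unwinds the kernel nvmet configfs tree in reverse creation order before the digest suite starts. A condensed sketch of that teardown, using the subsystem and port names from this run (the namespace enable path is an assumption; the trace only shows the bare echo 0):

    # Sketch of the configfs teardown traced above; nqn and port match this run.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the 'echo 0' step
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet              # unload once nothing holds the modules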
00:35:31.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.213 --rc genhtml_branch_coverage=1 00:35:31.213 --rc genhtml_function_coverage=1 00:35:31.213 --rc genhtml_legend=1 00:35:31.213 --rc geninfo_all_blocks=1 00:35:31.213 --rc geninfo_unexecuted_blocks=1 00:35:31.213 00:35:31.213 ' 00:35:31.213 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.213 --rc genhtml_branch_coverage=1 00:35:31.213 --rc genhtml_function_coverage=1 00:35:31.213 --rc genhtml_legend=1 00:35:31.213 --rc geninfo_all_blocks=1 00:35:31.213 --rc geninfo_unexecuted_blocks=1 00:35:31.213 00:35:31.213 ' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.214 --rc genhtml_branch_coverage=1 00:35:31.214 --rc genhtml_function_coverage=1 00:35:31.214 --rc genhtml_legend=1 00:35:31.214 --rc geninfo_all_blocks=1 00:35:31.214 --rc geninfo_unexecuted_blocks=1 00:35:31.214 00:35:31.214 ' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:31.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.214 --rc genhtml_branch_coverage=1 00:35:31.214 --rc genhtml_function_coverage=1 00:35:31.214 --rc genhtml_legend=1 00:35:31.214 --rc geninfo_all_blocks=1 00:35:31.214 --rc geninfo_unexecuted_blocks=1 00:35:31.214 00:35:31.214 ' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.214 
12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.214 12:40:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.214 12:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.788 
12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:37.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:37.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:37.788 Found net devices under 0000:af:00.0: cvl_0_0 
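The device scan above resolves each candidate NIC's PCI function to its kernel net device through sysfs before deciding which port becomes the target and which the initiator. The same lookup by hand, using the first port found in this run:

    # Sketch: map a PCI function to its net interface via sysfs.
    pci=0000:af:00.0
    ls "/sys/bus/pci/devices/$pci/net"   # prints cvl_0_0 on this host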
00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:37.788 Found net devices under 0000:af:00.1: cvl_0_1 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.788 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:35:37.789 00:35:37.789 --- 10.0.0.2 ping statistics --- 00:35:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.789 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:37.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:35:37.789 00:35:37.789 --- 10.0.0.1 ping statistics --- 00:35:37.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.789 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 ************************************ 00:35:37.789 START TEST nvmf_digest_clean 00:35:37.789 ************************************ 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=527238 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 527238 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527238 ']' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 [2024-12-13 12:41:04.652818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:37.789 [2024-12-13 12:41:04.652858] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.789 [2024-12-13 12:41:04.728187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.789 [2024-12-13 12:41:04.750565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.789 [2024-12-13 12:41:04.750597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.789 [2024-12-13 12:41:04.750604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.789 [2024-12-13 12:41:04.750610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.789 [2024-12-13 12:41:04.750618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
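nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc and blocks until the RPC socket answers. A condensed sketch of that startup, with paths from this workspace (the polling loop is an assumption; waitforlisten's exact retry logic lives in autotest_common.sh):

    # Sketch: start nvmf_tgt in the target netns and wait for its RPC socket.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                        # poll the default /var/tmp/spdk.sock
    done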
00:35:37.789 [2024-12-13 12:41:04.751118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 null0 00:35:37.789 [2024-12-13 12:41:04.931461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.789 [2024-12-13 12:41:04.955662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527414 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527414 /var/tmp/bperf.sock 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527414 ']' 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:37.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.789 12:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.789 [2024-12-13 12:41:05.007114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:37.789 [2024-12-13 12:41:05.007155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527414 ] 00:35:37.789 [2024-12-13 12:41:05.078468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.789 [2024-12-13 12:41:05.101554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:37.789 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.790 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.048 nvme0n1 00:35:38.048 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:38.048 12:41:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:38.307 Running I/O for 2 seconds... 
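Each digest run drives bdevperf in RPC mode: start it idle with -z and --wait-for-rpc, finish framework init, attach the remote controller with data digest (--ddgst) enabled, then trigger the workload through bdevperf.py. A compressed sketch of this first run, using the flags and paths from the trace above (socket readiness polling omitted):

    # Sketch: one bperf digest run, as traced above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests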
00:35:40.180 24458.00 IOPS, 95.54 MiB/s
[2024-12-13T11:41:07.880Z] 24515.50 IOPS, 95.76 MiB/s
00:35:40.180 Latency(us)
00:35:40.180 [2024-12-13T11:41:07.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:40.180 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:40.180 nvme0n1 : 2.00 24538.50 95.85 0.00 0.00 5210.72 2621.44 15291.73
00:35:40.180 [2024-12-13T11:41:07.880Z] ===================================================================================================================
00:35:40.180 [2024-12-13T11:41:07.880Z] Total : 24538.50 95.85 0.00 0.00 5210.72 2621.44 15291.73
00:35:40.180 {
00:35:40.180 "results": [
00:35:40.180 {
00:35:40.180 "job": "nvme0n1",
00:35:40.180 "core_mask": "0x2",
00:35:40.180 "workload": "randread",
00:35:40.180 "status": "finished",
00:35:40.180 "queue_depth": 128,
00:35:40.180 "io_size": 4096,
00:35:40.180 "runtime": 2.003342,
00:35:40.180 "iops": 24538.49617289509,
00:35:40.180 "mibps": 95.85350067537145,
00:35:40.180 "io_failed": 0,
00:35:40.180 "io_timeout": 0,
00:35:40.180 "avg_latency_us": 5210.719342076585,
00:35:40.180 "min_latency_us": 2621.44,
00:35:40.180 "max_latency_us": 15291.733333333334
00:35:40.180 }
00:35:40.180 ],
00:35:40.180 "core_count": 1
00:35:40.180 }
00:35:40.180 12:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
12:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
12:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
12:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:40.180 | select(.opcode=="crc32c")
00:35:40.180 | "\(.module_name) \(.executed)"'
12:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:40.439 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527414
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527414 ']'
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527414
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527414
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 =
sudo ']' 00:35:40.439 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527414' 00:35:40.439 killing process with pid 527414 00:35:40.439 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527414 00:35:40.439 Received shutdown signal, test time was about 2.000000 seconds 00:35:40.439 00:35:40.439 Latency(us) 00:35:40.439 [2024-12-13T11:41:08.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.439 [2024-12-13T11:41:08.139Z] =================================================================================================================== 00:35:40.439 [2024-12-13T11:41:08.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.439 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527414 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527879 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527879 /var/tmp/bperf.sock 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527879 ']' 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.698 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:40.698 [2024-12-13 12:41:08.270878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:40.698 [2024-12-13 12:41:08.270926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527879 ] 00:35:40.698 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:40.698 Zero copy mechanism will not be used. 00:35:40.698 [2024-12-13 12:41:08.344506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.698 [2024-12-13 12:41:08.366618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.957 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.957 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:40.957 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:40.957 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:40.957 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:41.216 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:41.216 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:41.474 nvme0n1 00:35:41.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:41.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:41.474 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:41.474 Zero copy mechanism will not be used. 00:35:41.474 Running I/O for 2 seconds... 
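The zero-copy notices above are expected for this run: 128 KiB I/Os exceed the reported 65536-byte zero-copy threshold, so the socket layer falls back to copying. The MiB/s column in the results that follow is just IOPS times IO size; a quick cross-check against the figures below:

    # Sketch: MiB/s = IOPS * io_size / 2^20, using the 128 KiB run's figures.
    awk 'BEGIN { print 5940.52 * 131072 / 1048576 }'   # -> 742.565, matching 742.56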
00:35:43.801 5760.00 IOPS, 720.00 MiB/s
[2024-12-13T11:41:11.501Z] 5942.00 IOPS, 742.75 MiB/s
00:35:43.801 Latency(us)
00:35:43.801 [2024-12-13T11:41:11.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:43.801 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:43.801 nvme0n1 : 2.00 5940.52 742.56 0.00 0.00 2690.75 624.15 4868.39
00:35:43.801 [2024-12-13T11:41:11.501Z] ===================================================================================================================
00:35:43.801 [2024-12-13T11:41:11.501Z] Total : 5940.52 742.56 0.00 0.00 2690.75 624.15 4868.39
00:35:43.801 {
00:35:43.801 "results": [
00:35:43.801 {
00:35:43.801 "job": "nvme0n1",
00:35:43.801 "core_mask": "0x2",
00:35:43.801 "workload": "randread",
00:35:43.801 "status": "finished",
00:35:43.801 "queue_depth": 16,
00:35:43.801 "io_size": 131072,
00:35:43.801 "runtime": 2.003192,
00:35:43.801 "iops": 5940.518931784872,
00:35:43.801 "mibps": 742.564866473109,
00:35:43.801 "io_failed": 0,
00:35:43.801 "io_timeout": 0,
00:35:43.801 "avg_latency_us": 2690.7512368147254,
00:35:43.801 "min_latency_us": 624.152380952381,
00:35:43.801 "max_latency_us": 4868.388571428572
00:35:43.801 }
00:35:43.801 ],
00:35:43.801 "core_count": 1
00:35:43.801 }
00:35:43.801 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:43.802 | select(.opcode=="crc32c")
00:35:43.802 | "\(.module_name) \(.executed)"'
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:43.802 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527879
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527879 ']'
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527879
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527879
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1
= sudo ']' 00:35:43.802 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527879' 00:35:43.802 killing process with pid 527879 00:35:43.802 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527879 00:35:43.802 Received shutdown signal, test time was about 2.000000 seconds 00:35:43.802 00:35:43.802 Latency(us) 00:35:43.802 [2024-12-13T11:41:11.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.802 [2024-12-13T11:41:11.502Z] =================================================================================================================== 00:35:43.802 [2024-12-13T11:41:11.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:43.802 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527879 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528338 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528338 /var/tmp/bperf.sock 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528338 ']' 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:44.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:44.061 [2024-12-13 12:41:11.557533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:44.061 [2024-12-13 12:41:11.557578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528338 ] 00:35:44.061 [2024-12-13 12:41:11.628031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.061 [2024-12-13 12:41:11.647362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:44.061 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:44.320 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:44.320 12:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:44.580 nvme0n1 00:35:44.580 12:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:44.580 12:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:44.839 Running I/O for 2 seconds... 
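The run above is driven entirely over the bdevperf JSON-RPC socket: the controller is attached with --ddgst (NVMe/TCP data digest enabled), framework_start_init is issued, and bdevperf.py invokes perform_tests on /var/tmp/bperf.sock; afterwards accel_get_stats plus the jq filter verifies that crc32c was actually executed and by the expected module (software, since scan_dsa=false). A minimal sketch of the same perform_tests call, assuming plain JSON-RPC 2.0 framing over the UNIX socket — the socket path and method name come from this log, the client code itself is illustrative and not a test helper:

    # Minimal JSON-RPC 2.0 call over the bdevperf control socket.
    # Socket path and method name are taken from the log above; the
    # read-until-complete framing is an assumption for illustration.
    import json
    import socket

    def bperf_rpc(method, params=None, sock_path="/var/tmp/bperf.sock"):
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before reply completed")
                buf += chunk
                try:
                    return json.loads(buf.decode())  # complete JSON document
                except ValueError:
                    continue  # response not fully received yet

    # bperf_rpc("perform_tests") mirrors: bdevperf.py -s /var/tmp/bperf.sock perform_tests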
00:35:46.712 27262.00 IOPS, 106.49 MiB/s [2024-12-13T11:41:14.412Z] 27383.00 IOPS, 106.96 MiB/s 00:35:46.712 Latency(us) 00:35:46.712 [2024-12-13T11:41:14.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.712 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:46.712 nvme0n1 : 2.00 27384.71 106.97 0.00 0.00 4666.03 3510.86 10048.85 00:35:46.712 [2024-12-13T11:41:14.412Z] =================================================================================================================== 00:35:46.712 [2024-12-13T11:41:14.412Z] Total : 27384.71 106.97 0.00 0.00 4666.03 3510.86 10048.85 00:35:46.712 { 00:35:46.712 "results": [ 00:35:46.712 { 00:35:46.712 "job": "nvme0n1", 00:35:46.712 "core_mask": "0x2", 00:35:46.712 "workload": "randwrite", 00:35:46.712 "status": "finished", 00:35:46.712 "queue_depth": 128, 00:35:46.712 "io_size": 4096, 00:35:46.712 "runtime": 2.004549, 00:35:46.712 "iops": 27384.71346921427, 00:35:46.712 "mibps": 106.97153698911825, 00:35:46.712 "io_failed": 0, 00:35:46.712 "io_timeout": 0, 00:35:46.712 "avg_latency_us": 4666.028816333471, 00:35:46.712 "min_latency_us": 3510.8571428571427, 00:35:46.712 "max_latency_us": 10048.853333333333 00:35:46.712 } 00:35:46.712 ], 00:35:46.712 "core_count": 1 00:35:46.712 } 00:35:46.712 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:46.712 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:46.712 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:46.712 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:46.712 | select(.opcode=="crc32c") 00:35:46.712 | "\(.module_name) \(.executed)"' 00:35:46.712 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:46.972 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:46.972 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:46.972 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528338 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528338 ']' 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528338 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528338 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528338' 00:35:46.973 killing process with pid 528338 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528338 00:35:46.973 Received shutdown signal, test time was about 2.000000 seconds 00:35:46.973 00:35:46.973 Latency(us) 00:35:46.973 [2024-12-13T11:41:14.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.973 [2024-12-13T11:41:14.673Z] =================================================================================================================== 00:35:46.973 [2024-12-13T11:41:14.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:46.973 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528338 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528945 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528945 /var/tmp/bperf.sock 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528945 ']' 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.232 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.232 [2024-12-13 12:41:14.842508] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:47.233 [2024-12-13 12:41:14.842554] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528945 ] 00:35:47.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:47.233 Zero copy mechanism will not be used. 00:35:47.233 [2024-12-13 12:41:14.916623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.492 [2024-12-13 12:41:14.938984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.492 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.492 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:47.492 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:47.492 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:47.492 12:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.750 12:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.750 12:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:48.009 nvme0n1 00:35:48.009 12:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:48.009 12:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:48.269 Zero copy mechanism will not be used. 00:35:48.269 Running I/O for 2 seconds... 
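In each results block the MiB/s figure is simply the measured IOPS scaled by the configured I/O size, so the tables are self-consistent and can be cross-checked: mibps = iops * io_size / 2^20. A quick check against the two completed runs above, with the values copied from their JSON results:

    # Cross-check mibps = iops * io_size / 2**20 against the logged JSON.
    runs = [
        (5940.518931784872, 131072, 742.564866473109),    # randread, qd 16, 128 KiB
        (27384.71346921427, 4096, 106.97153698911825),    # randwrite, qd 128, 4 KiB
    ]
    for iops, io_size, reported in runs:
        mibps = iops * io_size / (1 << 20)
        assert abs(mibps - reported) < 1e-6, (mibps, reported)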
00:35:50.171 5972.00 IOPS, 746.50 MiB/s [2024-12-13T11:41:17.871Z] 6389.50 IOPS, 798.69 MiB/s 00:35:50.171 Latency(us) 00:35:50.171 [2024-12-13T11:41:17.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.171 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:50.171 nvme0n1 : 2.00 6387.50 798.44 0.00 0.00 2500.66 1888.06 11734.06 00:35:50.171 [2024-12-13T11:41:17.871Z] =================================================================================================================== 00:35:50.171 [2024-12-13T11:41:17.871Z] Total : 6387.50 798.44 0.00 0.00 2500.66 1888.06 11734.06 00:35:50.171 { 00:35:50.171 "results": [ 00:35:50.171 { 00:35:50.171 "job": "nvme0n1", 00:35:50.171 "core_mask": "0x2", 00:35:50.171 "workload": "randwrite", 00:35:50.171 "status": "finished", 00:35:50.171 "queue_depth": 16, 00:35:50.171 "io_size": 131072, 00:35:50.171 "runtime": 2.002974, 00:35:50.171 "iops": 6387.501784845934, 00:35:50.171 "mibps": 798.4377231057417, 00:35:50.171 "io_failed": 0, 00:35:50.171 "io_timeout": 0, 00:35:50.171 "avg_latency_us": 2500.6592697469796, 00:35:50.171 "min_latency_us": 1888.0609523809524, 00:35:50.171 "max_latency_us": 11734.064761904761 00:35:50.171 } 00:35:50.171 ], 00:35:50.171 "core_count": 1 00:35:50.171 } 00:35:50.171 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:50.171 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:50.171 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:50.171 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:50.171 | select(.opcode=="crc32c") 00:35:50.171 | "\(.module_name) \(.executed)"' 00:35:50.171 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528945 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528945 ']' 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528945 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.431 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528945 00:35:50.431 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.431 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:50.431 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528945' 00:35:50.431 killing process with pid 528945 00:35:50.431 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528945 00:35:50.431 Received shutdown signal, test time was about 2.000000 seconds 00:35:50.431 00:35:50.431 Latency(us) 00:35:50.431 [2024-12-13T11:41:18.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.431 [2024-12-13T11:41:18.131Z] =================================================================================================================== 00:35:50.431 [2024-12-13T11:41:18.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:50.431 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528945 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 527238 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527238 ']' 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527238 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527238 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527238' 00:35:50.691 killing process with pid 527238 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527238 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527238 00:35:50.691 00:35:50.691 real 0m13.788s 00:35:50.691 user 0m26.340s 00:35:50.691 sys 0m4.620s 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.691 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:50.691 ************************************ 00:35:50.691 END TEST nvmf_digest_clean 00:35:50.691 ************************************ 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.951 ************************************ 00:35:50.951 START TEST nvmf_digest_error 00:35:50.951 ************************************ 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=529495 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 529495 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529495 ']' 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.951 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.951 [2024-12-13 12:41:18.516056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:50.951 [2024-12-13 12:41:18.516098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.951 [2024-12-13 12:41:18.590080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.951 [2024-12-13 12:41:18.611511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.951 [2024-12-13 12:41:18.611545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:50.951 [2024-12-13 12:41:18.611552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.951 [2024-12-13 12:41:18.611558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.951 [2024-12-13 12:41:18.611563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
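The target for the error-path test is started with --wait-for-rpc for the same reason the bdevperf instances above are: crc32c must be reassigned to the error accel module (the accel_assign_opc notice just below) before the accel framework initializes, since opcode-to-module assignments cannot be changed afterwards. A sketch of that control ordering using the same rpc.py commands that appear in this trace — the rpc() wrapper and the default target socket path are assumptions, the commands and their arguments are copied from the log:

    # Sketch of the RPC ordering used by the digest-error test; the
    # rpc() wrapper and /var/tmp/spdk.sock default are assumptions,
    # the commands themselves appear verbatim in the trace.
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(sock, *args):
        subprocess.run([RPC, "-s", sock, *args], check=True)

    tgt = "/var/tmp/spdk.sock"  # assumed default rpc_cmd socket
    # must precede framework initialization:
    rpc(tgt, "accel_assign_opc", "-o", "crc32c", "-m", "error")
    rpc(tgt, "framework_start_init")  # shown explicitly only for the bperf side
    # armed per test case: injection disabled first, then crc32c corruption
    rpc(tgt, "accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    rpc(tgt, "accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")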
00:35:50.951 [2024-12-13 12:41:18.612038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.211 [2024-12-13 12:41:18.704531] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.211 null0 00:35:51.211 [2024-12-13 12:41:18.795760] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.211 [2024-12-13 12:41:18.819979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=529521 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 529521 /var/tmp/bperf.sock 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529521 ']' 
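With crc32c routed to the error module, the accel_error_inject_error call below (-o crc32c -t corrupt -i 256) arms corruption of the digest calculation for a bounded number of operations, so each affected read completes as a data digest error in nvme_tcp.c followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c — the stream that fills the remainder of this run. A throwaway sketch (not part of the test suite) for tallying those injected failures from a captured log:

    # Tally injected digest errors per qpair from a captured log file.
    # The patterns match the nvme_tcp.c / nvme_qpair.c lines below;
    # "bperf.log" is a placeholder file name.
    import re
    from collections import Counter

    digest_re = re.compile(r"data digest error on tqpair=\((0x[0-9a-f]+)\)")
    status_re = re.compile(r"TRANSPORT ERROR \(00/22\)")

    per_qpair, transient = Counter(), 0
    with open("bperf.log") as log:
        for line in log:
            per_qpair.update(digest_re.findall(line))
            transient += len(status_re.findall(line))

    print(dict(per_qpair), transient)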
00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:51.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.211 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.211 [2024-12-13 12:41:18.874328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:51.211 [2024-12-13 12:41:18.874367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529521 ] 00:35:51.471 [2024-12-13 12:41:18.947476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.471 [2024-12-13 12:41:18.970030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.471 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.471 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:51.471 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:51.471 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:51.731 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.299 nvme0n1 00:35:52.299 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:52.299 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.299 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:52.299 
12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.300 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:52.300 12:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:52.300 Running I/O for 2 seconds... 00:35:52.300 [2024-12-13 12:41:19.821966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.821996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.822007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.833534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.833558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.833567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.842707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.842728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.842736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.854445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.854466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.854474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.865696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.865720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.865728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.874390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.874410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.874417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.886718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.886739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.886748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.899577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.899597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.899605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.910862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.910882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.910890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.920581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.920608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.929114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.929133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.929142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.940124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.940144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.940152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.953068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.953089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.953100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.964466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.964485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.964493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.972480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.972499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.972506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.983545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.983564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.983572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.300 [2024-12-13 12:41:19.994975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.300 [2024-12-13 12:41:19.994995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.300 [2024-12-13 12:41:19.995003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.003383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.003405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.003415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.014099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.014119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.014127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.023582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.023602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.023611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.035933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.035952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.035961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.049003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.049027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.049035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.057601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.057623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.057633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.069695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.069716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.081309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.081331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.081339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.089764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.089789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.089798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.100442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.100470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 
[2024-12-13 12:41:20.100482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.108899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.108920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.108929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.120648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.120668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.120675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.132499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.132518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.132526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.141196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.141217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.141224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.153216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.153237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.153245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.165900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.165921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.165929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.173814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.173833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10993 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.173842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.185898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.185918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.185926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.197497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.197517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.197524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.209860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.209880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.209888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.221111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.221131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.221138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.229919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.229938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.229951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.240153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.240174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.240181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.561 [2024-12-13 12:41:20.252194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.561 [2024-12-13 12:41:20.252216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:11208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.561 [2024-12-13 12:41:20.252223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.260771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.260798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.260807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.271190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.271211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.271219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.280227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.280249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.280257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.289740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.289760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.289768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.300339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.300360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.300368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.308474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.308494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.308501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.320539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.320562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.320570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.330724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.330744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.330752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.339773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.339799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.339807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.349961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.349981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.349990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.361235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.361255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.361262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.369875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.369895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.369903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.381091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.381112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.381119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.391370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 
[2024-12-13 12:41:20.391390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.391397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.399708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.399729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.399737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.410313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.410333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.410340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.419048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.419068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.419076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.428898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.428926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.428935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.438833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.438853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.438861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.447300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.447320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.447328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.457222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.457243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.457250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.465657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.465676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.822 [2024-12-13 12:41:20.465685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.822 [2024-12-13 12:41:20.476636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.822 [2024-12-13 12:41:20.476656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.823 [2024-12-13 12:41:20.476664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.823 [2024-12-13 12:41:20.485594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.823 [2024-12-13 12:41:20.485614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.823 [2024-12-13 12:41:20.485626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.823 [2024-12-13 12:41:20.494940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.823 [2024-12-13 12:41:20.494959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.823 [2024-12-13 12:41:20.494966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.823 [2024-12-13 12:41:20.506906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.823 [2024-12-13 12:41:20.506926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.823 [2024-12-13 12:41:20.506934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.823 [2024-12-13 12:41:20.518321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:52.823 [2024-12-13 12:41:20.518341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.823 [2024-12-13 12:41:20.518348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.526534] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.526554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.526563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.539091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.539112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.539120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.551531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.551552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.559665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.559684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.559692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.571102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.571130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.583664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.583688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.583696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.595565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.595586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.595593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:53.083 [2024-12-13 12:41:20.606985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.607005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.607012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.616425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.616445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.616453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.627285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.627304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.627312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.636319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.636339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.636347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.648196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.648232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.648240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.659525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.659545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.659553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.668737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.668756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.083 [2024-12-13 12:41:20.668764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.083 [2024-12-13 12:41:20.680703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.083 [2024-12-13 12:41:20.680723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.680731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.688962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.688981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.688988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.701035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.701055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.701063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.712988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.713008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.713016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.720953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.720972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.720979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.732606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.732625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.732633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.742996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.743014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.743023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.751713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.751732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.751740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.764804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.764826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.764834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.084 [2024-12-13 12:41:20.772808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.084 [2024-12-13 12:41:20.772828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.084 [2024-12-13 12:41:20.772836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.784664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.784685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.784693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.797582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.797602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.797610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 24187.00 IOPS, 94.48 MiB/s [2024-12-13T11:41:21.044Z] [2024-12-13 12:41:20.811079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.811107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.819323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.819341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:53.344 [2024-12-13 12:41:20.819349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.830840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.830859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.830867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.843671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.843691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.843698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.855530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.344 [2024-12-13 12:41:20.855551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.344 [2024-12-13 12:41:20.855559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.344 [2024-12-13 12:41:20.865025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.865045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.865053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.876887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.876906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.876914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.888714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.888733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.888741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.897956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4127 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.897983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.909448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.909468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.909476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.921881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.921901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.921909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.934119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.934138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.934146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.942241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.942260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.942268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.954052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.954071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.954082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.965277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.965296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.965304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.977519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.977538] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.977545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.986267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.986287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.986294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:20.998552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:20.998572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:20.998579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:21.009960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:21.009979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:21.009987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:21.018875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:21.018895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:21.018903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.345 [2024-12-13 12:41:21.030945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.345 [2024-12-13 12:41:21.030964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.345 [2024-12-13 12:41:21.030971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.605 [2024-12-13 12:41:21.043371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.605 [2024-12-13 12:41:21.043392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.605 [2024-12-13 12:41:21.043400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.051971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.051992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.052000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.063740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.063759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.063766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.074279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.074298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.074305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.086604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.086623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.086630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.094918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.094937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.094944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.106899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.106918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.106926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.118824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.118842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.118850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.126902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 
00:35:53.606 [2024-12-13 12:41:21.126920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.126928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.138495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.138514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.138521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.148647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.148666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.148674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.157317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.157335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.157343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.168439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.168458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.168466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.177714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.177733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.177740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.185854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.185873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.185881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.195800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.195819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.195826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.203998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.204017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.204025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.215741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.215760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.215769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.228277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.228297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.228308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.239492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.239512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.239520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.250862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.250882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.250890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.259028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.259048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.259056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.268416] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.268435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.268443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.280187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.280206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.280214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.288504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.288523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.288531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.606 [2024-12-13 12:41:21.301125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.606 [2024-12-13 12:41:21.301145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.606 [2024-12-13 12:41:21.301153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.312724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.312744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.312752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.321443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.321463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.334259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.334279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.334303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:53.867 [2024-12-13 12:41:21.346746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.346765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.346774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.356880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.356899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.356907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.366702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.366722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.366730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.375299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.375319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.375326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.387007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.387026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.387034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.395594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.395612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.395620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.405503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.405523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.405534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.417501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.417520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.417528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.426416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.426434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.426442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.438503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.438523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.438530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.447137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.447156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.447164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.458408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.458429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.458437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.469509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.469533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.469541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.867 [2024-12-13 12:41:21.480333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990) 00:35:53.867 [2024-12-13 12:41:21.480354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.867 [2024-12-13 12:41:21.480361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.867 [2024-12-13 12:41:21.488195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990)
00:35:53.867 [2024-12-13 12:41:21.488214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.867 [2024-12-13 12:41:21.488222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.867 [2024-12-13 12:41:21.500101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746990)
00:35:53.867 [2024-12-13 12:41:21.500124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.867 [2024-12-13 12:41:21.500131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x746990), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining read of this run through 12:41:21.814; duplicate entries condensed ...]
00:35:54.128 24293.00 IOPS, 94.89 MiB/s [2024-12-13T11:41:21.828Z]
00:35:54.128                                                      Latency(us)
00:35:54.128 [2024-12-13T11:41:21.828Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s   TO/s  Average      min      max
00:35:54.128 [2024-12-13T11:41:21.828Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:54.128                              nvme0n1 : 2.00      24308.84    94.96    0.00   0.00  5260.42  2434.19 18100.42
00:35:54.128 [2024-12-13T11:41:21.828Z] ===================================================================================================================
00:35:54.128 [2024-12-13T11:41:21.828Z] Total : 24308.84    94.96    0.00   0.00  5260.42  2434.19 18100.42
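The summary is internally consistent: at the 4096-byte IO size, 24308.84 IOPS × 4096 B ≈ 99.57 MB/s, which is 24308.84 / 256 ≈ 94.96 MiB/s, matching the MiB/s column; the JSON block below repeats the same counters at full precision.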
00:35:54.128 "job": "nvme0n1", 00:35:54.128 "core_mask": "0x2", 00:35:54.128 "workload": "randread", 00:35:54.128 "status": "finished", 00:35:54.129 "queue_depth": 128, 00:35:54.129 "io_size": 4096, 00:35:54.129 "runtime": 2.003962, 00:35:54.129 "iops": 24308.844179680054, 00:35:54.129 "mibps": 94.95642257687521, 00:35:54.129 "io_failed": 0, 00:35:54.129 "io_timeout": 0, 00:35:54.129 "avg_latency_us": 5260.424275880407, 00:35:54.129 "min_latency_us": 2434.194285714286, 00:35:54.129 "max_latency_us": 18100.41904761905 00:35:54.129 } 00:35:54.129 ], 00:35:54.129 "core_count": 1 00:35:54.129 } 00:35:54.389 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:54.389 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:54.389 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:54.389 | .driver_specific 00:35:54.389 | .nvme_error 00:35:54.389 | .status_code 00:35:54.389 | .command_transient_transport_error' 00:35:54.389 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 )) 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 529521 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529521 ']' 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529521 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529521 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529521' 00:35:54.389 killing process with pid 529521 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529521 00:35:54.389 Received shutdown signal, test time was about 2.000000 seconds 00:35:54.389 00:35:54.389 Latency(us) 00:35:54.389 [2024-12-13T11:41:22.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.389 [2024-12-13T11:41:22.089Z] =================================================================================================================== 00:35:54.389 [2024-12-13T11:41:22.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.389 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529521 00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:54.648 12:41:22 
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530184
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530184 /var/tmp/bperf.sock
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530184 ']'
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:54.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:54.648 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:54.648 [2024-12-13 12:41:22.282093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:54.648 [2024-12-13 12:41:22.282137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530184 ]
00:35:54.648 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:54.648 Zero copy mechanism will not be used.
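For reference, the bring-up that run_bperf_err traces above reduces to launching bdevperf with -z (stay idle until a perform_tests RPC arrives) and waiting for its RPC socket. A sketch under this run's paths; the poll loop is a stand-in for the autotest waitforlisten helper, which also retries up to max_retries=100:

  # Core mask 0x2 (core 1), 128 KiB random reads, queue depth 16, 2-second runs;
  # -z keeps bdevperf idle until perform_tests is sent over /var/tmp/bperf.sock.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  until [ -S /var/tmp/bperf.sock ]; do   # wait for the RPC listener to appear
      kill -0 "$bperfpid" || exit 1      # give up if bdevperf exited early
      sleep 0.1
  done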
00:35:54.908 [2024-12-13 12:41:22.357540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:54.908 [2024-12-13 12:41:22.379943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:54.908 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:54.908 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:54.908 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:54.908 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:55.167 12:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:55.426 nvme0n1
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:55.426 12:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:55.687 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:55.687 Zero copy mechanism will not be used.
00:35:55.687 Running I/O for 2 seconds...
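Pulled together, the setup traced above is four RPCs plus the trigger: enable per-status error counters on the bdevperf side, clear any stale injection, attach the controller with data digest enabled (--ddgst), arm crc32c corruption, and start the run. A sketch with this run's paths and target address; that rpc_cmd (unlike bperf_rpc) goes to the target-side SPDK app's default socket is inferred from the helper names, not spelled out in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # bdevperf side: count NVMe errors per status code, retry failed I/O forever
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure no earlier crc32c injection is still armed
  $rpc accel_error_inject_error -o crc32c -t disable
  # attach over TCP with data digest on, exposing namespace 1 as nvme0n1
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: arm crc32c corruption with the -i 32 argument this test uses
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the 2-second workload configured on the bdevperf command line
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests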
00:35:55.687 [2024-12-13 12:41:23.146852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:55.687 [2024-12-13 12:41:23.146885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.687 [2024-12-13 12:41:23.146896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:55.687 [2024-12-13 12:41:23.153504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:55.687 [2024-12-13 12:41:23.153530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.687 [2024-12-13 12:41:23.153544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
[... the same three-line digest-error pattern repeats for the remaining 32-block reads of this run (tqpair 0x13f2c50, 12:41:23.160 through 12:41:23.674); duplicate entries condensed ...]
00:35:56.212 [2024-12-13 12:41:23.679397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:56.213 [2024-12-13 12:41:23.679418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.213
[2024-12-13 12:41:23.679426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.684801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.684820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.684828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.690096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.690116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.690124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.695475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.695495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.695503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.700955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.700975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.700983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.706206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.706227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.706235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.711619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.711638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.711645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.716830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.716850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.716862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.722029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.722049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.722057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.727310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.727329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.727337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.732668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.732688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.732695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.738330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.738351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.738359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.743666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.743686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.743694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.748923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.748943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.748950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.754620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.754640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.760695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.760715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.760723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.766124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.766148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.766156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.771636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.771657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.771665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.776980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.777000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.777008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.781894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.781914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.781921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.787432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.787452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.787460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.792699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.792720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.792728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.797900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.797920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.797928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.803190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.803210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.803217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.808452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.808473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.808481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.813828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.813848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.813856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.819242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.819261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.819269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.824626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.824646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.824654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.829989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 
00:35:56.213 [2024-12-13 12:41:23.830009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.830016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.835241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.835261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.835268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.839990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.840019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.843909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.843928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.843936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.847025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.847047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.847055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.852154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.213 [2024-12-13 12:41:23.852173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.213 [2024-12-13 12:41:23.852184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.213 [2024-12-13 12:41:23.858932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.858953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.858961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.866311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.866332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.866340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.874160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.874180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.874188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.882636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.882656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.882664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.890446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.890467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.890476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.896191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.896212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.896220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.901933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.901954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.901961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.214 [2024-12-13 12:41:23.907338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.214 [2024-12-13 12:41:23.907359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.214 [2024-12-13 12:41:23.907368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.912795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.912816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.912824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.918395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.918415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.918424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.923594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.923615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.923625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.928629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.928652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.928662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.933873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.933893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.933902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.938979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.938999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.939006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.944037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.944057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.944065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:35:56.474 [2024-12-13 12:41:23.949213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.949232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.949240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.954407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.954427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.954439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.959436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.959455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.959463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.964506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.964526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.964534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.969670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.969690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.969698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.974637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.974657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.974665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.979698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.979718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.979726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.984689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.984709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.984717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.989689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.989708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.989717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.994667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.994687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.994694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:23.999824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:23.999849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:23.999857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.005107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.005127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.005134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.010335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.010356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.010364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.015503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.015524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.015532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.020640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.020661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.020668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.025904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.025924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.025932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.031131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.031151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.031159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.036319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.036340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.036347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.041487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.041507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.041515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.046604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.046624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.046631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.051840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.051860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:56.474 [2024-12-13 12:41:24.051868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.057124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.057145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.057152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.062292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.062312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.062320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.067417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.067437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.067445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.072600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.072621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.072629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.077759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.077778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.077792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.082979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.082999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.083007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.088196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.088216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.088231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.093447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.093467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.093475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.099437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.099458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.104827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.104846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.104854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.110095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.110115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.110122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.115306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.115326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.115333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.120506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.120526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.120534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.125605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.125625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.125633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.130775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.130803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.130811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.135987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.136011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.136019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 5439.00 IOPS, 679.88 MiB/s [2024-12-13T11:41:24.174Z] [2024-12-13 12:41:24.141676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.141697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.141704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.146863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.146883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.146893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.152036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.152056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.152064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.157190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:56.474 [2024-12-13 12:41:24.157210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.474 [2024-12-13 12:41:24.157217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:56.474 [2024-12-13 12:41:24.162382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:56.474 [2024-12-13 12:41:24.162402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.474 [2024-12-13 12:41:24.162410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:35:56.474 [2024-12-13 12:41:24.167599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:56.474 [2024-12-13 12:41:24.167635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.474 [2024-12-13 12:41:24.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
[... ~140 similar record groups elided (12:41:24.172838 through 12:41:24.906036, elapsed markers 00:35:56.474 to 00:35:57.260): each group is an nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x13f2c50), followed by a READ command notice (sqid:1, cid and lba varying, len:32, SGL TRANSPORT DATA BLOCK) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:35:57.260 [2024-12-13 12:41:24.911183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50)
00:35:57.260 [2024-12-13 12:41:24.911204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.260 [2024-12-13 12:41:24.911211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.260 [2024-12-13 12:41:24.916408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.916429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.916437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.921612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.921633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.921640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.926825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.926847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.926855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.931992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.932013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.932021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.937160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.937181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.937189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.942727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.942759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.948391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.948411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.948420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.261 [2024-12-13 12:41:24.953664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.261 [2024-12-13 12:41:24.953685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.261 [2024-12-13 12:41:24.953692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.958926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.958948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.527 [2024-12-13 12:41:24.958956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.964205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.964225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.527 [2024-12-13 12:41:24.964233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.969377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.969398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.527 [2024-12-13 12:41:24.969406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.974551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.974572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.527 [2024-12-13 12:41:24.974580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.979805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.979827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.527 [2024-12-13 12:41:24.979834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.985016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.527 [2024-12-13 12:41:24.985036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:57.527 [2024-12-13 12:41:24.985048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.527 [2024-12-13 12:41:24.990166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:24.990187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:24.990195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:24.995301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:24.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:24.995331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.000433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.000454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.000462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.005637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.005657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.005665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.010832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.010853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.010860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.016048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.016068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.016078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.021187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.021207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.021215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.026303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.026324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.026332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.031423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.031446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.031454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.036626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.036646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.036654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.041796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.041817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.041824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.046965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.046985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.046993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.052140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.052161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.052168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.057358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.057378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.057386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.062491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.062511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.062518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.528 [2024-12-13 12:41:25.067688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.528 [2024-12-13 12:41:25.067708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.528 [2024-12-13 12:41:25.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.072830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.072850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.072857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.078040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.078060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.078067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.083305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.083325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.083333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.088367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.088388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.088395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.093508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 
00:35:57.529 [2024-12-13 12:41:25.093527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.093535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.098662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.098682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.098690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.103814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.103834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.103842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.109056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.109076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.109084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.114223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.114243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.114250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.119419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.119439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.119450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.124537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.124557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.124567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.129658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.129678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.129686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.134802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.134822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.134830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:57.529 [2024-12-13 12:41:25.139983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13f2c50) 00:35:57.529 [2024-12-13 12:41:25.140003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.529 [2024-12-13 12:41:25.140010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:57.529 5703.50 IOPS, 712.94 MiB/s 00:35:57.529 Latency(us) 00:35:57.529 [2024-12-13T11:41:25.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.529 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:57.529 nvme0n1 : 2.00 5704.04 713.01 0.00 0.00 2802.41 604.65 8363.64 00:35:57.529 [2024-12-13T11:41:25.229Z] =================================================================================================================== 00:35:57.529 [2024-12-13T11:41:25.229Z] Total : 5704.04 713.01 0.00 0.00 2802.41 604.65 8363.64 00:35:57.529 { 00:35:57.529 "results": [ 00:35:57.529 { 00:35:57.529 "job": "nvme0n1", 00:35:57.529 "core_mask": "0x2", 00:35:57.529 "workload": "randread", 00:35:57.529 "status": "finished", 00:35:57.529 "queue_depth": 16, 00:35:57.529 "io_size": 131072, 00:35:57.529 "runtime": 2.002615, 00:35:57.529 "iops": 5704.041965130592, 00:35:57.529 "mibps": 713.005245641324, 00:35:57.529 "io_failed": 0, 00:35:57.529 "io_timeout": 0, 00:35:57.529 "avg_latency_us": 2802.412533443387, 00:35:57.529 "min_latency_us": 604.6476190476191, 00:35:57.529 "max_latency_us": 8363.641904761906 00:35:57.529 } 00:35:57.529 ], 00:35:57.529 "core_count": 1 00:35:57.529 } 00:35:57.529 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:57.529 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:57.529 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:57.530 | .driver_specific 00:35:57.530 | .nvme_error 00:35:57.530 | .status_code 00:35:57.530 | .command_transient_transport_error' 00:35:57.530 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:57.792 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 )) 00:35:57.792 12:41:25 
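The get_transient_errcount trace above is the pass/fail check for this randread pass: host/digest.sh reads the bdev's NVMe error counters over the bperf RPC socket and requires the transient-transport-error count (368 here) to be greater than zero. A minimal standalone sketch of the same query, assuming the same socket and bdev name as in this run, could look like:

  # Sketch: read the COMMAND TRANSIENT TRANSPORT ERROR counter for nvme0n1.
  # The counter is only collected because bdev_nvme_set_options --nvme-error-stat
  # was issued when the controller was set up earlier in this test.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo 'no digest errors were counted' >&2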
00:35:57.792 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530184
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530184 ']'
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530184
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530184
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530184'
killing process with pid 530184
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530184
Received shutdown signal, test time was about 2.000000 seconds
00:35:57.792
00:35:57.792 Latency(us)
[2024-12-13T11:41:25.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:57.792 [2024-12-13T11:41:25.492Z] ===================================================================================================================
00:35:57.792 [2024-12-13T11:41:25.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:57.792 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530184
00:35:58.052 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530646
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530646 /var/tmp/bperf.sock
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530646 ']'
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
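The randwrite pass reuses the same bperf.sock pattern seen above: bdevperf is started idle with -z, waitforlisten polls the UNIX socket (max_retries=100 in the trace) before any RPC is issued, and I/O only starts on a later perform_tests RPC. A reduced sketch of that launch, using the binary path from this workspace, might be:

  # Sketch: start bdevperf as an idle RPC server (-z) for the error-injection run.
  # -m 2: core mask; -w randwrite -o 4096 -q 128: workload; -t 2: two-second run.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # The harness proceeds once the socket accepts RPCs, and later kicks off I/O with:
  #   bdevperf.py -s /var/tmp/bperf.sock perform_tests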
00:35:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:58.052 [2024-12-13 12:41:25.614887] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:35:58.052 [2024-12-13 12:41:25.614937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530646 ]
00:35:58.052 [2024-12-13 12:41:25.688771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:58.052 [2024-12-13 12:41:25.708835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:58.312 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:58.312 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:58.312 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:58.882 nvme0n1
12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:58.882 12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:58.882 Running I/O for 2 seconds...
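Taken together, the traces above are the complete digest-error setup for this pass: error statistics and unlimited bdev retries are enabled first, crc32c injection is cleared while the controller attaches with data digest enabled (--ddgst), and corruption is only armed right before perform_tests, so every WRITE data PDU that follows fails its CRC32C data-digest check and completes as a transient transport error. Condensed into one hedged sketch (all commands taken verbatim from the trace; reading -i 256 as the injection count is an assumption):

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count (00/22) errors; retry indefinitely
  $rpc accel_error_inject_error -o crc32c -t disable                   # attach cleanly, no injection yet
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # data digest on for this controller
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # now corrupt crc32c results (assumed count: 256)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                             # start the 2-second randwrite run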
00:35:58.882 [2024-12-13 12:41:26.417195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee01f8
[2024-12-13 12:41:26.418167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-13 12:41:26.418198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... dozens of further Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) record triplets on tqpair=(0x178b0e0), pdu values varying, elided ...]
00:35:59.404 [2024-12-13 12:41:26.849965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeaef0
[2024-12-13 12:41:26.851260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12417 len:1 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.851279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.858992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0788 00:35:59.404 [2024-12-13 12:41:26.860309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.860327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.867134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eecc78 00:35:59.404 [2024-12-13 12:41:26.868199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.868218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.876022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee6fa8 00:35:59.404 [2024-12-13 12:41:26.876938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.876956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.884406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0788 00:35:59.404 [2024-12-13 12:41:26.885360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.885377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.893438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efda78 00:35:59.404 [2024-12-13 12:41:26.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.893930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.902486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6cc8 00:35:59.404 [2024-12-13 12:41:26.903224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.903242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.911659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eec840 00:35:59.404 [2024-12-13 12:41:26.912522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5222 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.912543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.921209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0bc0 00:35:59.404 [2024-12-13 12:41:26.922335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.922353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.930397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee12d8 00:35:59.404 [2024-12-13 12:41:26.931616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.931634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.939586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efa7d8 00:35:59.404 [2024-12-13 12:41:26.940329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.940348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.949821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eea248 00:35:59.404 [2024-12-13 12:41:26.951368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.951386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.956127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efe720 00:35:59.404 [2024-12-13 12:41:26.956872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.956889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.965244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efc128 00:35:59.404 [2024-12-13 12:41:26.965874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.965892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.974286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeaef0 00:35:59.404 [2024-12-13 12:41:26.975039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.975056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.983567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efa7d8 00:35:59.404 [2024-12-13 12:41:26.984502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.984521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:26.992869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eff3c8 00:35:59.404 [2024-12-13 12:41:26.993945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:26.993964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:27.001086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0350 00:35:59.404 [2024-12-13 12:41:27.001916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:27.001934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-13 12:41:27.011184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eebfd0 00:35:59.404 [2024-12-13 12:41:27.012242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-13 12:41:27.012259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.018326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6cc8 00:35:59.405 [2024-12-13 12:41:27.018935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.018954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.027547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef20d8 00:35:59.405 [2024-12-13 12:41:27.028249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.028267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.036736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee6738 00:35:59.405 [2024-12-13 12:41:27.037465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.037483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.044941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0bc0 00:35:59.405 [2024-12-13 12:41:27.045709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.045726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.054237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ede8a8 00:35:59.405 [2024-12-13 12:41:27.055194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.055212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.064272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eed920 00:35:59.405 [2024-12-13 12:41:27.065207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.065225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.072561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016edf550 00:35:59.405 [2024-12-13 12:41:27.073481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.073498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.081594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef9b30 00:35:59.405 [2024-12-13 12:41:27.082658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.082677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.090292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efa7d8 00:35:59.405 [2024-12-13 12:41:27.091341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.091359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:59.405 [2024-12-13 12:41:27.099642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef3a28 00:35:59.405 [2024-12-13 
12:41:27.100821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-13 12:41:27.100839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.108938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee7c50 00:35:59.665 [2024-12-13 12:41:27.109640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.109658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.117937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef20d8 00:35:59.665 [2024-12-13 12:41:27.118904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.118922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.126251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee99d8 00:35:59.665 [2024-12-13 12:41:27.127560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.135287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee3d08 00:35:59.665 [2024-12-13 12:41:27.136129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.136147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.143798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee49b0 00:35:59.665 [2024-12-13 12:41:27.144476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.144497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.152044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeee38 00:35:59.665 [2024-12-13 12:41:27.152798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.162000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efa7d8 
00:35:59.665 [2024-12-13 12:41:27.162933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.162951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.171099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eea248 00:35:59.665 [2024-12-13 12:41:27.171760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.171778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.179969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee9168 00:35:59.665 [2024-12-13 12:41:27.180966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.180984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.188605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efd640 00:35:59.665 [2024-12-13 12:41:27.189389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.189407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.196964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6458 00:35:59.665 [2024-12-13 12:41:27.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.197745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.207822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee0ea0 00:35:59.665 [2024-12-13 12:41:27.208987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.209005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:59.665 [2024-12-13 12:41:27.215122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeb760 00:35:59.665 [2024-12-13 12:41:27.215663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-13 12:41:27.215681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.224463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) 
with pdu=0x200016ee7c50 00:35:59.666 [2024-12-13 12:41:27.225351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.225369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.233778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef81e0 00:35:59.666 [2024-12-13 12:41:27.234740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.234758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.242887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef2d80 00:35:59.666 [2024-12-13 12:41:27.243468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.243486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.252156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee3d08 00:35:59.666 [2024-12-13 12:41:27.252827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.252845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.261182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef20d8 00:35:59.666 [2024-12-13 12:41:27.262147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.262164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.270993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef7100 00:35:59.666 [2024-12-13 12:41:27.272372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.272389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.277505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ede8a8 00:35:59.666 [2024-12-13 12:41:27.278282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.278300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.286791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x178b0e0) with pdu=0x200016eeee38 00:35:59.666 [2024-12-13 12:41:27.287603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.287621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.295634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee3060 00:35:59.666 [2024-12-13 12:41:27.296521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.296538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.304700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6890 00:35:59.666 [2024-12-13 12:41:27.305127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.305145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.315402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efac10 00:35:59.666 [2024-12-13 12:41:27.316727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.324729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efbcf0 00:35:59.666 [2024-12-13 12:41:27.326286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.326303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.331366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eed0b0 00:35:59.666 [2024-12-13 12:41:27.332180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.332199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.342380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efef90 00:35:59.666 [2024-12-13 12:41:27.343690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.343708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.350779] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eea680 00:35:59.666 [2024-12-13 12:41:27.351839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.351857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-13 12:41:27.359815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef92c0 00:35:59.666 [2024-12-13 12:41:27.360907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-13 12:41:27.360925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.370926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0ff8 00:35:59.926 [2024-12-13 12:41:27.372483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.372501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.377230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee27f0 00:35:59.926 [2024-12-13 12:41:27.377859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.377881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.386766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef46d0 00:35:59.926 [2024-12-13 12:41:27.387681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.387698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.397873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee12d8 00:35:59.926 [2024-12-13 12:41:27.399372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.399389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.406964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eed920 00:35:59.926 [2024-12-13 12:41:27.408343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.408360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:59.926 28390.00 IOPS, 110.90 MiB/s 
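Each failure above follows the same two-step pattern: tcp.c:data_crc32_calc_done reports a data digest (CRC-32C) mismatch on the received PDU data, and the affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable generic status, which is why the workload keeps running. The interval summary just above (28390.00 IOPS, 110.90 MiB/s) is consistent with the 0x1000-byte payloads in every record: 28390 x 4096 B / 1048576 is approximately 110.90 MiB/s. Below is a minimal, self-contained C sketch of the digest check this log exercises, assuming the standard CRC-32C (Castagnoli) parameters that NVMe/TCP data digests are defined over; the helper names pdu_data_digest_ok and NVME_SC_TRANSIENT_TRANSPORT_ERROR are illustrative assumptions for this sketch, not SPDK APIs.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
     * init 0xFFFFFFFF, final XOR 0xFFFFFFFF. NVMe/TCP data digests
     * (DDGST) are CRC-32C over the PDU DATA field. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Illustrative status values: the log's "(00/22)" is status code
     * type 0x00 (generic) / status code 0x22 (Command Transient
     * Transport Error). */
    #define NVME_SCT_GENERIC                  0x00
    #define NVME_SC_TRANSIENT_TRANSPORT_ERROR 0x22

    /* Hypothetical check mirroring what data_crc32_calc_done reports:
     * recompute the digest over the received payload and compare it
     * with the DDGST field carried by the PDU. */
    static int pdu_data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
    {
        return crc32c(data, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[4096];               /* len:0x1000, as in the log */
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload));
        uint32_t bad  = good ^ 0x1u;         /* simulate a corrupted digest */

        printf("digest ok: %d\n", pdu_data_digest_ok(payload, sizeof(payload), good));
        if (!pdu_data_digest_ok(payload, sizeof(payload), bad))
            printf("data digest error -> complete command with (%02x/%02x)\n",
                   NVME_SCT_GENERIC, NVME_SC_TRANSIENT_TRANSPORT_ERROR);
        return 0;
    }

Because 00/22 is defined as transient, the initiator treats the digest mismatch as a recoverable transport-level event rather than failing the namespace, which matches the log: each error is immediately followed by further WRITE submissions on the same qpair (tqpair 0x178b0e0).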
[2024-12-13T11:41:27.626Z] [2024-12-13 12:41:27.414376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef1868 00:35:59.926 [2024-12-13 12:41:27.415267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.415285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.423755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6890 00:35:59.926 [2024-12-13 12:41:27.424692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.424711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.432772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0bc0 00:35:59.926 [2024-12-13 12:41:27.433455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.433473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.441108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efb480 00:35:59.926 [2024-12-13 12:41:27.441776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.441807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.450118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0350 00:35:59.926 [2024-12-13 12:41:27.450776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.450799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.460775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0350 00:35:59.926 [2024-12-13 12:41:27.462022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.462040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.470329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef46d0 00:35:59.926 [2024-12-13 12:41:27.471687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.471705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.478518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0bc0 00:35:59.926 [2024-12-13 12:41:27.479902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-13 12:41:27.479931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:59.926 [2024-12-13 12:41:27.488414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee6300 00:35:59.927 [2024-12-13 12:41:27.489526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.489544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.495595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee4de8 00:35:59.927 [2024-12-13 12:41:27.496275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.496293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.504426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eebfd0 00:35:59.927 [2024-12-13 12:41:27.505100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.505118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.513423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef7da8 00:35:59.927 [2024-12-13 12:41:27.514072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.514091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.521525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016edf118 00:35:59.927 [2024-12-13 12:41:27.522378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.522395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.530997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee99d8 00:35:59.927 [2024-12-13 12:41:27.531789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.531807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.540291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee27f0 00:35:59.927 [2024-12-13 12:41:27.541276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.541294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.549713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee95a0 00:35:59.927 [2024-12-13 12:41:27.550741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.558897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeaab8 00:35:59.927 [2024-12-13 12:41:27.559893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.559911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.567547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef6890 00:35:59.927 [2024-12-13 12:41:27.568283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.568301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.575707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef2d80 00:35:59.927 [2024-12-13 12:41:27.576475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.576493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.585019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eddc00 00:35:59.927 [2024-12-13 12:41:27.585902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.585921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.594374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee8d30 00:35:59.927 [2024-12-13 12:41:27.595366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.595384] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.603671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef7538 00:35:59.927 [2024-12-13 12:41:27.604844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.604863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.611598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efeb58 00:35:59.927 [2024-12-13 12:41:27.612242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.612264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:59.927 [2024-12-13 12:41:27.620802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016efe2e8 00:35:59.927 [2024-12-13 12:41:27.621550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-13 12:41:27.621569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.629738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eefae0 00:36:00.187 [2024-12-13 12:41:27.630473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.630491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.638263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee88f8 00:36:00.187 [2024-12-13 12:41:27.638890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.638908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.647663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eecc78 00:36:00.187 [2024-12-13 12:41:27.648397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.648416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.657215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef1868 00:36:00.187 [2024-12-13 12:41:27.658174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.658192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.666485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee2c28 00:36:00.187 [2024-12-13 12:41:27.667004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.667022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.677196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee9e10 00:36:00.187 [2024-12-13 12:41:27.678455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-13 12:41:27.678474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.187 [2024-12-13 12:41:27.685687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef20d8 00:36:00.188 [2024-12-13 12:41:27.686845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-13 12:41:27.686863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.188 [2024-12-13 12:41:27.695028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef0350 00:36:00.188 [2024-12-13 12:41:27.696199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-13 12:41:27.696218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.188 [2024-12-13 12:41:27.703972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee88f8 00:36:00.188 [2024-12-13 12:41:27.705181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-13 12:41:27.705199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.188 [2024-12-13 12:41:27.711438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ee9e10 00:36:00.188 [2024-12-13 12:41:27.712129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-13 12:41:27.712147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.188 [2024-12-13 12:41:27.720626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016eeaab8 00:36:00.188 [2024-12-13 12:41:27.721526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-13 
[log condensed: from 12:41:27.721 through 12:41:28.410 the run above continues with dozens of near-identical notice pairs on tqpair=(0x178b0e0): a tcp.c:2241:data_crc32_calc_done *ERROR* 'Data digest error' with varying pdu values, each followed by a WRITE command (len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000, varying cid/lba) completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1; only the final pair is kept below]
00:36:00.971 [2024-12-13 12:41:28.410282]
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b0e0) with pdu=0x200016ef7100 00:36:00.971 [2024-12-13 12:41:28.411253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.971 [2024-12-13 12:41:28.411271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.971 28441.50 IOPS, 111.10 MiB/s 00:36:00.971 Latency(us) 00:36:00.971 [2024-12-13T11:41:28.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.971 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:00.971 nvme0n1 : 2.00 28448.90 111.13 0.00 0.00 4493.71 1778.83 13044.78 00:36:00.971 [2024-12-13T11:41:28.671Z] =================================================================================================================== 00:36:00.971 [2024-12-13T11:41:28.671Z] Total : 28448.90 111.13 0.00 0.00 4493.71 1778.83 13044.78 00:36:00.971 { 00:36:00.971 "results": [ 00:36:00.971 { 00:36:00.971 "job": "nvme0n1", 00:36:00.971 "core_mask": "0x2", 00:36:00.971 "workload": "randwrite", 00:36:00.971 "status": "finished", 00:36:00.971 "queue_depth": 128, 00:36:00.971 "io_size": 4096, 00:36:00.971 "runtime": 2.003979, 00:36:00.971 "iops": 28448.9009116363, 00:36:00.971 "mibps": 111.1285191860793, 00:36:00.971 "io_failed": 0, 00:36:00.971 "io_timeout": 0, 00:36:00.971 "avg_latency_us": 4493.710617516586, 00:36:00.971 "min_latency_us": 1778.8342857142857, 00:36:00.971 "max_latency_us": 13044.784761904762 00:36:00.971 } 00:36:00.971 ], 00:36:00.971 "core_count": 1 00:36:00.971 } 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:00.971 | .driver_specific 00:36:00.971 | .nvme_error 00:36:00.971 | .status_code 00:36:00.971 | .command_transient_transport_error' 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 223 > 0 )) 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530646 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530646 ']' 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530646 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.971 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530646 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530646' 00:36:01.231 killing process with pid 530646 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530646 00:36:01.231 Received shutdown signal, test time was about 2.000000 seconds 00:36:01.231 00:36:01.231 Latency(us) 00:36:01.231 [2024-12-13T11:41:28.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.231 [2024-12-13T11:41:28.931Z] =================================================================================================================== 00:36:01.231 [2024-12-13T11:41:28.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530646 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531119 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531119 /var/tmp/bperf.sock 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531119 ']' 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:01.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.231 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.231 [2024-12-13 12:41:28.894099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:01.231 [2024-12-13 12:41:28.894147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531119 ] 00:36:01.231 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:01.231 Zero copy mechanism will not be used. 
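[note: the trace above launches a second bdevperf instance for the 128 KiB randwrite error run and waits on its RPC socket; a minimal stand-alone sketch of that launch follows, where $SPDK_DIR and the poll loop are stand-ins for the harness's checkout path and its waitforlisten helper]
  # Launch bdevperf idle (-z) and poll its RPC socket until it answers.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumption: SPDK build root
  SOCK=/var/tmp/bperf.sock
  # -m 2: core mask 0x2 (core 1 only); -w randwrite -o 131072 -q 16 -t 2:
  # 128 KiB random writes at queue depth 16 for 2 s; -z: stay idle until
  # perform_tests arrives over RPC
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # stand-in for waitforlisten: block until the RPC server answers on the socket
  until "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done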
00:36:01.490 [2024-12-13 12:41:28.967502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.490 [2024-12-13 12:41:28.989879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.490 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.490 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:01.490 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:01.490 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.749 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:02.009 nvme0n1 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:02.009 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:02.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:02.269 Zero copy mechanism will not be used. 00:36:02.269 Running I/O for 2 seconds... 
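[note: the RPC sequence traced above is the core of this digest-error test: enable per-status error counters and unlimited bdev retries, attach the controller with data digest (--ddgst) enabled, arm 32 corrupted crc32c operations, then kick off the run; a sketch collecting those calls follows, assuming rpc_cmd in the harness addresses the nvmf target's default RPC socket while bperf_rpc adds -s /var/tmp/bperf.sock]
  RPC="$SPDK_DIR/scripts/rpc.py"
  # bdevperf side: count NVMe errors per status code and retry failed I/O forever
  "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # default RPC socket (assumed to be the nvmf target): clear any stale crc32c fault
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # attach over TCP with data digest on, so TCP data PDUs carry a CRC32C that
  # the injected corruption will invalidate
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm 32 corrupted crc32c operations, then start the 2 s workload
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests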
00:36:02.269 [log condensed: once perform_tests starts, the 2-second 128 KiB randwrite run emits a steady stream of notice pairs beginning at 12:41:29.789711, all on tqpair=(0x178b420) with pdu=0x200016eff3c8: a tcp.c:2241:data_crc32_calc_done *ERROR* 'Data digest error', each followed by a WRITE command (cid:0, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, varying lba) completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0002/0022/0042/0062; the captured log breaks off mid-entry at 12:41:30.012273]
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.012548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.016712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.016964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.016983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.021029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.021282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.021301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.025303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.025541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.025561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.029559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.029817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.029837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.033867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.034096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.034116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.038836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.039064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.039085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.043523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.043752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.043770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.048275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.048513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.048533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.053113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.053349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.053368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.057821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.058069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.058088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.062707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.062948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.062967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.068279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.068503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.068521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.073216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.073446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.073465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.077835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.078061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.082266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.082514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.082534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.086501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.532 [2024-12-13 12:41:30.086750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-13 12:41:30.086769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.532 [2024-12-13 12:41:30.090710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.090947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.090970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.094975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.095217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.095236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.099191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.099432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.099450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.103682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.103914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.103933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.108612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 
12:41:30.108856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.108875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.113914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.114144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.114164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.119185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.119417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.119436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.123680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.123919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.123938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.128250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.128485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.128504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.132706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.132958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.132978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.137037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.137270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.137289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.141563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with 
pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.141798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.141817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.146077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.146305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.146324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.151191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.151415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.151435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.156196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.156426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.156445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.161068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.161307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.166034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.166263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.166283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.171474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.171702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.171721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.175955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.176190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.176209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.180848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.181081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.181100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.185687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.185924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.185943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.190455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.190667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.190685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.195028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.195253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.195272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.199754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.199993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.200013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.204661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.204898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.204918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.209687] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.209912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.209931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.215424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.215671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.533 [2024-12-13 12:41:30.215693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.533 [2024-12-13 12:41:30.221447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.533 [2024-12-13 12:41:30.221746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.534 [2024-12-13 12:41:30.221765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.534 [2024-12-13 12:41:30.228188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.534 [2024-12-13 12:41:30.228450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.534 [2024-12-13 12:41:30.228470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.234734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.235057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.235076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.242144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.242405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.242424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.249756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.250022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.256090] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.256312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.256332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.261385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.261607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.261626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.266430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.266667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.266686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.271336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.271574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.271594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.276249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.276475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.276494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.281044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.281273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.281292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.285824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.286061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.286080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.795 
[2024-12-13 12:41:30.290664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.290903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.290922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.295398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.295620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.295639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.300155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.300382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.300401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.304658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.304891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.304910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.309039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.309261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.309280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.314170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.314493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.314513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.320063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.320304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.795 [2024-12-13 12:41:30.320323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:36:02.795 [2024-12-13 12:41:30.324917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.795 [2024-12-13 12:41:30.325239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.325258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.330860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.331162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.331181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.337174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.337466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.337485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.343473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.343768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.343793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.349417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.349703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.349722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.355323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.355653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.361435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.361721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.361744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.367488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.367800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.367820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.374290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.374543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.374562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.379824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.380064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.380083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.384491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.384755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.384774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.389647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.389888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.389913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.394520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.394788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.394807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.399301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.399575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.399594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.404115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.404358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.404378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.408856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.409125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.409143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.413473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.413726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.413746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.418135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.418407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.418426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.422808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.423121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.427643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.427957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.427976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.432385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.432638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.432657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.437147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.437428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.441728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.441998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.442017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.446360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.446636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.446655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.451075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.451380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.451399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.455969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.456255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.456274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.460760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.461003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.461022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.796 [2024-12-13 12:41:30.465950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:02.796 [2024-12-13 12:41:30.466198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.796 [2024-12-13 12:41:30.466218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:02.797 [2024-12-13 12:41:30.471096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8
00:36:02.797 [2024-12-13 12:41:30.471321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:02.797 [2024-12-13 12:41:30.471340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:02.797 [2024-12-13 12:41:30.476579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8
00:36:02.797 [2024-12-13 12:41:30.476853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:02.797 [2024-12-13 12:41:30.476873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated records omitted: the same triplet — tcp.c:2241:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8, the affected WRITE (sqid:1 cid:0 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — timestamps 12:41:30.482 through 12:41:30.779 ...]
00:36:03.319 6258.00 IOPS, 782.25 MiB/s [2024-12-13T11:41:31.019Z]
[... repeated data digest error triplets omitted, same pattern as above, timestamps 12:41:30.784 through 12:41:31.207 ...]
00:36:03.583 [2024-12-13 12:41:31.212157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8
00:36:03.583 [2024-12-13 12:41:31.212384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.583 [2024-12-13 12:41:31.212403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:03.583 [2024-12-13 12:41:31.216351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8
00:36:03.583 [2024-12-13 12:41:31.216578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.216596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.220571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.220811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.220830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.224792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.225020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.225038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.229055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.229284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.229303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.233248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.233490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.233508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.237548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.237791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.237810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.241766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.242004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.242023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.245983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 
12:41:31.246235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.246254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.250230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.250471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.250489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.254471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.254701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.254719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.258633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.258879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.258898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.262875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.263108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.263127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.267081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.267331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.267350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.271270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.271506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.271524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.583 [2024-12-13 12:41:31.275434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with 
pdu=0x200016eff3c8 00:36:03.583 [2024-12-13 12:41:31.275687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.583 [2024-12-13 12:41:31.275706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.279630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.279895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.279918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.283890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.284140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.284159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.288154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.288376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.288395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.292515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.292740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.292759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.296945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.297185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.297204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.301389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.301621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.301640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.305815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.306078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.306097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.310235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.310469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.310489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.314758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.315002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.315021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.319213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.319460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.319479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.323558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.323807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.323826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.328039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.328272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.328291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.332480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.332726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.332745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.336955] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.337205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.337224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.341440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.341689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.341708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.345846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.346103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.346122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.350560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.350810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.350829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.355064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.355308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.355327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.359478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.359716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.359735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.844 [2024-12-13 12:41:31.364022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.364262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.364280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.844 
[2024-12-13 12:41:31.368430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.844 [2024-12-13 12:41:31.368675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.844 [2024-12-13 12:41:31.368693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.372914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.373146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.377332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.377573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.377591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.381475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.381711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.381730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.385615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.385858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.385877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.389743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.389982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.390001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.394238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.394491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.394515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.399851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.400177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.400197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.405795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.406098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.406117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.411675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.412019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.412037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.418140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.418469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.418488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.424177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.424510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.424529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.430023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.430341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.430359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.436048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.436350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.436369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.442372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.442713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.442732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.448590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.448919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.448938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.454869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.455120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.455140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.460875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.461165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.461184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.466882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.467246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.467265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.473073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.473383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.473402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.478662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.478958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.478976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.484677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.484912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.484931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.489686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.489896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.489914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.495007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.495293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.495312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.500886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.501194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.501213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.506479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.506787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.506806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.511278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.511504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.511523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.516788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.517074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.845 [2024-12-13 12:41:31.517092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.845 [2024-12-13 12:41:31.521823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.845 [2024-12-13 12:41:31.522031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.846 [2024-12-13 12:41:31.522054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.846 [2024-12-13 12:41:31.526354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.846 [2024-12-13 12:41:31.526549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.846 [2024-12-13 12:41:31.526573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.846 [2024-12-13 12:41:31.530789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.846 [2024-12-13 12:41:31.531005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.846 [2024-12-13 12:41:31.531024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.846 [2024-12-13 12:41:31.535220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.846 [2024-12-13 12:41:31.535417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.846 [2024-12-13 12:41:31.535435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.846 [2024-12-13 12:41:31.539877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:03.846 [2024-12-13 12:41:31.540090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.846 [2024-12-13 12:41:31.540112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.544205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.544421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.544439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.548475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.548696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 
12:41:31.548715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.552872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.553080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.553099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.557058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.557282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.557299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.561364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.561567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.561586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.566798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.567097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.567117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.572073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.572287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.572306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.576392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.576603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.576622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.580799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.581013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:04.107 [2024-12-13 12:41:31.581032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.585158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.585383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.585402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.589390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.589613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.589632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.593858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.594054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.594072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.598377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.598572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.598595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.602692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.602913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.602930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.607083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.607294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.607313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.611368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.611578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.611597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.615791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.615984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.616001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.620630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.620837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.620855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.625158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.625368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.625386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.629947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.630154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.630173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.634571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.634749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.634766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.639088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.639284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.639308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.107 [2024-12-13 12:41:31.643970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 00:36:04.107 [2024-12-13 12:41:31.644167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.107 [2024-12-13 12:41:31.644185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... 00:36:04.107-00:36:04.108 (12:41:31.648542 through 12:41:31.776616): tcp.c:2241:data_crc32_calc_done reported the same *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8 for every injected WRITE (len:32, lbas between 480 and 25408), and nvme_qpair.c printed each command with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; one print_command/print_completion pair per digest error, repeated for the remainder of the run ...]
00:36:04.108 [2024-12-13 12:41:31.783118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x178b420) with pdu=0x200016eff3c8
00:36:04.108 [2024-12-13 12:41:31.783316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.108 [2024-12-13 12:41:31.783334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.108 6236.50 IOPS, 779.56 MiB/s 00:36:04.109 Latency(us) 00:36:04.109 [2024-12-13T11:41:31.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.109 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:04.109 nvme0n1 : 2.01 6230.02 778.75 0.00 0.00 2562.91 1794.44 11796.48 00:36:04.109 [2024-12-13T11:41:31.809Z] =================================================================================================================== 00:36:04.109 [2024-12-13T11:41:31.809Z] Total : 6230.02 778.75 0.00 0.00 2562.91 1794.44 11796.48 00:36:04.109 { 00:36:04.109 "results": [ 00:36:04.109 { 00:36:04.109 "job": "nvme0n1", 00:36:04.109 "core_mask": "0x2", 00:36:04.109 "workload": "randwrite", 00:36:04.109 "status": "finished", 00:36:04.109 "queue_depth": 16, 00:36:04.109 "io_size": 131072, 00:36:04.109 "runtime": 2.005292, 00:36:04.109 "iops": 6230.0153793063555, 00:36:04.109 "mibps": 778.7519224132944, 00:36:04.109 "io_failed": 0, 00:36:04.109 "io_timeout": 0, 00:36:04.109 "avg_latency_us": 2562.912911992621, 00:36:04.109 "min_latency_us": 1794.4380952380952, 00:36:04.109 "max_latency_us": 11796.48 00:36:04.109 } 00:36:04.109 ], 00:36:04.109 "core_count": 1 00:36:04.109 } 00:36:04.368 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:04.368 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:04.368 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:04.368 | .driver_specific 00:36:04.368 | .nvme_error 00:36:04.368 | .status_code 00:36:04.368 | .command_transient_transport_error' 00:36:04.368 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531119 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531119 ']' 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531119 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531119 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 531119' 00:36:04.368 killing process with pid 531119 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531119 00:36:04.368 Received shutdown signal, test time was about 2.000000 seconds 00:36:04.368 00:36:04.368 Latency(us) 00:36:04.368 [2024-12-13T11:41:32.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.368 [2024-12-13T11:41:32.068Z] =================================================================================================================== 00:36:04.368 [2024-12-13T11:41:32.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:04.368 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531119 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 529495 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529495 ']' 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529495 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529495 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529495' 00:36:04.627 killing process with pid 529495 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529495 00:36:04.627 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529495 00:36:04.886 00:36:04.886 real 0m13.952s 00:36:04.886 user 0m26.774s 00:36:04.886 sys 0m4.451s 00:36:04.886 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.886 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.886 ************************************ 00:36:04.886 END TEST nvmf_digest_error 00:36:04.887 ************************************ 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:04.887 rmmod nvme_tcp 
00:36:04.887 rmmod nvme_fabrics 00:36:04.887 rmmod nvme_keyring 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 529495 ']' 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 529495 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 529495 ']' 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 529495 00:36:04.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (529495) - No such process 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 529495 is not found' 00:36:04.887 Process with pid 529495 is not found 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.887 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:07.426 00:36:07.426 real 0m36.163s 00:36:07.426 user 0m54.951s 00:36:07.426 sys 0m13.606s 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.426 ************************************ 00:36:07.426 END TEST nvmf_digest 00:36:07.426 ************************************ 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:07.426 ************************************ 00:36:07.426 START TEST nvmf_bdevperf 00:36:07.426 ************************************ 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:07.426 * Looking for test storage... 00:36:07.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:07.426 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:07.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.427 --rc genhtml_branch_coverage=1 00:36:07.427 --rc genhtml_function_coverage=1 00:36:07.427 --rc genhtml_legend=1 00:36:07.427 --rc geninfo_all_blocks=1 00:36:07.427 --rc geninfo_unexecuted_blocks=1 00:36:07.427 00:36:07.427 ' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:07.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.427 --rc genhtml_branch_coverage=1 00:36:07.427 --rc genhtml_function_coverage=1 00:36:07.427 --rc genhtml_legend=1 00:36:07.427 --rc geninfo_all_blocks=1 00:36:07.427 --rc geninfo_unexecuted_blocks=1 00:36:07.427 00:36:07.427 ' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:07.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.427 --rc genhtml_branch_coverage=1 00:36:07.427 --rc genhtml_function_coverage=1 00:36:07.427 --rc genhtml_legend=1 00:36:07.427 --rc geninfo_all_blocks=1 00:36:07.427 --rc geninfo_unexecuted_blocks=1 00:36:07.427 00:36:07.427 ' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:07.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.427 --rc genhtml_branch_coverage=1 00:36:07.427 --rc genhtml_function_coverage=1 00:36:07.427 --rc genhtml_legend=1 00:36:07.427 --rc geninfo_all_blocks=1 00:36:07.427 --rc geninfo_unexecuted_blocks=1 00:36:07.427 00:36:07.427 ' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:07.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.427 12:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.000 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:14.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:14.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:14.001 Found net devices under 0000:af:00.0: cvl_0_0 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:14.001 Found net devices under 0000:af:00.1: cvl_0_1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:36:14.001 00:36:14.001 --- 10.0.0.2 ping statistics --- 00:36:14.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.001 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:36:14.001 00:36:14.001 --- 10.0.0.1 ping statistics --- 00:36:14.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.001 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.001 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=535263 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 535263 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 535263 ']' 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.002 12:41:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 [2024-12-13 12:41:40.932083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
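Condensed for reference, the nvmftestinit plumbing traced above reduces to the following ip/iptables sequence (interface names cvl_0_0/cvl_0_1 and the cvl_0_0_ns_spdk namespace exactly as assigned in the trace; a sketch of what the harness ran, not additional test output):

    ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP arriving on the initiator interface

The two pings above (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify reachability in both directions before the target is started.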
00:36:14.002 [2024-12-13 12:41:40.932132] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.002 [2024-12-13 12:41:41.009900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:14.002 [2024-12-13 12:41:41.032465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.002 [2024-12-13 12:41:41.032501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.002 [2024-12-13 12:41:41.032508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.002 [2024-12-13 12:41:41.032513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.002 [2024-12-13 12:41:41.032518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.002 [2024-12-13 12:41:41.033774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.002 [2024-12-13 12:41:41.033824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.002 [2024-12-13 12:41:41.033825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 [2024-12-13 12:41:41.164377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 Malloc0 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
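Because the target was launched with -e 0xFFFF, the app_setup_trace notices above apply: tracepoints are live and can be snapshotted while the test runs. Both commands below are the ones the notices themselves suggest (shm instance 0, matching -i 0 on the nvmf_tgt command line):

    spdk_trace -s nvmf -i 0        # attach to the running target's trace shm and dump events
    cp /dev/shm/nvmf_trace.0 .     # or keep the raw shm file for offline analysis/debug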
00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:14.002 [2024-12-13 12:41:41.225008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.002 { 00:36:14.002 "params": { 00:36:14.002 "name": "Nvme$subsystem", 00:36:14.002 "trtype": "$TEST_TRANSPORT", 00:36:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.002 "adrfam": "ipv4", 00:36:14.002 "trsvcid": "$NVMF_PORT", 00:36:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.002 "hdgst": ${hdgst:-false}, 00:36:14.002 "ddgst": ${ddgst:-false} 00:36:14.002 }, 00:36:14.002 "method": "bdev_nvme_attach_controller" 00:36:14.002 } 00:36:14.002 EOF 00:36:14.002 )") 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:14.002 12:41:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:14.002 "params": { 00:36:14.002 "name": "Nvme1", 00:36:14.002 "trtype": "tcp", 00:36:14.002 "traddr": "10.0.0.2", 00:36:14.002 "adrfam": "ipv4", 00:36:14.002 "trsvcid": "4420", 00:36:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.002 "hdgst": false, 00:36:14.002 "ddgst": false 00:36:14.002 }, 00:36:14.002 "method": "bdev_nvme_attach_controller" 00:36:14.002 }' 00:36:14.002 [2024-12-13 12:41:41.278232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
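Taken together, the rpc_cmd calls traced in tgt_init above stand up the whole target. Replayed directly through scripts/rpc.py they would read as below (default RPC socket /var/tmp/spdk.sock assumed, i.e. the one waitforlisten polled; a condensed sketch of the sequence, not extra harness output):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # exposed to the host as nsid 1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then only needs the bdev_nvme_attach_controller JSON shown above to connect as nqn.2016-06.io.spdk:host1.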
00:36:14.002 [2024-12-13 12:41:41.278273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535293 ] 00:36:14.002 [2024-12-13 12:41:41.352799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.002 [2024-12-13 12:41:41.375119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.002 Running I/O for 1 seconds... 00:36:14.940 11466.00 IOPS, 44.79 MiB/s 00:36:14.940 Latency(us) 00:36:14.940 [2024-12-13T11:41:42.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:14.940 Verification LBA range: start 0x0 length 0x4000 00:36:14.940 Nvme1n1 : 1.01 11490.73 44.89 0.00 0.00 11097.26 1435.55 12108.56 00:36:14.940 [2024-12-13T11:41:42.640Z] =================================================================================================================== 00:36:14.940 [2024-12-13T11:41:42.640Z] Total : 11490.73 44.89 0.00 0.00 11097.26 1435.55 12108.56 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=535514 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.199 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.199 { 00:36:15.199 "params": { 00:36:15.199 "name": "Nvme$subsystem", 00:36:15.199 "trtype": "$TEST_TRANSPORT", 00:36:15.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.200 "adrfam": "ipv4", 00:36:15.200 "trsvcid": "$NVMF_PORT", 00:36:15.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.200 "hdgst": ${hdgst:-false}, 00:36:15.200 "ddgst": ${ddgst:-false} 00:36:15.200 }, 00:36:15.200 "method": "bdev_nvme_attach_controller" 00:36:15.200 } 00:36:15.200 EOF 00:36:15.200 )") 00:36:15.200 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:15.200 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:36:15.200 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:15.200 12:41:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.200 "params": { 00:36:15.200 "name": "Nvme1", 00:36:15.200 "trtype": "tcp", 00:36:15.200 "traddr": "10.0.0.2", 00:36:15.200 "adrfam": "ipv4", 00:36:15.200 "trsvcid": "4420", 00:36:15.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.200 "hdgst": false, 00:36:15.200 "ddgst": false 00:36:15.200 }, 00:36:15.200 "method": "bdev_nvme_attach_controller" 00:36:15.200 }' 00:36:15.200 [2024-12-13 12:41:42.779703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:15.200 [2024-12-13 12:41:42.779747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535514 ] 00:36:15.200 [2024-12-13 12:41:42.851724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.200 [2024-12-13 12:41:42.871794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.459 Running I/O for 15 seconds... 00:36:17.772 11592.00 IOPS, 45.28 MiB/s [2024-12-13T11:41:46.044Z] 11593.50 IOPS, 45.29 MiB/s [2024-12-13T11:41:46.044Z] 12:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 535263 00:36:18.344 12:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:18.344 [2024-12-13 12:41:45.747705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 12:41:45.747743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.345 [2024-12-13 12:41:45.747760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 12:41:45.747769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.345 [2024-12-13 12:41:45.747779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 12:41:45.747907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.345 [2024-12-13 12:41:45.747917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 12:41:45.747928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.345 [2024-12-13 12:41:45.747938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 12:41:45.747948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.345 [2024-12-13 12:41:45.747957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.345 [2024-12-13 
12:41:45.747965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated abort spam elided: nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs for READs lba 114432 through 115376 (len:8 each) and one WRITE lba 115400, every command completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000, all within 12:41:45.747976-12:41:45.749732 ...]
00:36:18.348 [2024-12-13 12:41:45.749739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115384 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.348 [2024-12-13 12:41:45.749746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.348 [2024-12-13 12:41:45.749753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f7cb0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.749762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:18.348 [2024-12-13 12:41:45.749767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:18.348 [2024-12-13 12:41:45.749773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115392 len:8 PRP1 0x0 PRP2 0x0 00:36:18.348 [2024-12-13 12:41:45.749785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:18.348 [2024-12-13 12:41:45.752633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.752686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.753259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.753275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.753283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.753457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.753631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.753639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.753647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.753655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
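errno = 111 is ECONNREFUSED: the kill -9 above took down the process serving 10.0.0.2:4420, so every reconnect attempt during the controller reset is refused until a listener returns. The same refusal can be observed from plain bash (a sketch, assuming the port stays closed while the target is down):

# Poll the target port until something listens again. The subshell confines
# the exec redirection, so a failed connect just yields a non-zero status
# (ECONNREFUSED, the same errno = 111 that posix_sock_create reports above).
while ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
    echo "connect() refused, retrying in 1s"
    sleep 1
done
echo "10.0.0.2:4420 is accepting connections again"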
00:36:18.348 [2024-12-13 12:41:45.765909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.766359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.766406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.766429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.767035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.767408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.767416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.767423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.767430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.348 [2024-12-13 12:41:45.778753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.779184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.779200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.779207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.779367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.779527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.779534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.779541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.779547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-13 12:41:45.791587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.791992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.792009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.792017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.792185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.792353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.792361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.792367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.792373] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.348 [2024-12-13 12:41:45.804447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.804808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.804852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.804875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.805441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.805610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.805618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.805624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.805630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-13 12:41:45.817321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.817684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.817700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.817707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.817881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.818050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.818058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.818064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.818070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.348 [2024-12-13 12:41:45.830136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.830502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.830517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.830524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.830687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.830850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.830859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.830864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.830870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-13 12:41:45.843000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-13 12:41:45.843417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-13 12:41:45.843432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-13 12:41:45.843439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.348 [2024-12-13 12:41:45.843598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.348 [2024-12-13 12:41:45.843756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-13 12:41:45.843763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-13 12:41:45.843769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-13 12:41:45.843775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.348 [2024-12-13 12:41:45.855772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-13 12:41:45.856171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-13 12:41:45.856187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-13 12:41:45.856194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.349 [2024-12-13 12:41:45.856352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.349 [2024-12-13 12:41:45.856511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-13 12:41:45.856519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-13 12:41:45.856525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-13 12:41:45.856530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-13 12:41:45.868532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-13 12:41:45.868949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-13 12:41:45.868965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-13 12:41:45.868972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.349 [2024-12-13 12:41:45.869131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.349 [2024-12-13 12:41:45.869289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-13 12:41:45.869300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-13 12:41:45.869306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-13 12:41:45.869312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.349 [2024-12-13 12:41:45.881365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-13 12:41:45.881705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-13 12:41:45.881721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-13 12:41:45.881728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.349 [2024-12-13 12:41:45.881913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.349 [2024-12-13 12:41:45.882081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-13 12:41:45.882089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-13 12:41:45.882095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-13 12:41:45.882101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-13 12:41:45.894178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.349 [2024-12-13 12:41:45.894596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.349 [2024-12-13 12:41:45.894611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.349 [2024-12-13 12:41:45.894618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.349 [2024-12-13 12:41:45.894777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.349 [2024-12-13 12:41:45.894966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.349 [2024-12-13 12:41:45.894974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.349 [2024-12-13 12:41:45.894980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.349 [2024-12-13 12:41:45.894985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.349 [2024-12-13 12:41:45.907032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.349 [2024-12-13 12:41:45.907438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.349 [2024-12-13 12:41:45.907482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.349 [2024-12-13 12:41:45.907504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.349 [2024-12-13 12:41:45.908102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.349 [2024-12-13 12:41:45.908630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.349 [2024-12-13 12:41:45.908638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.349 [2024-12-13 12:41:45.908644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.349 [2024-12-13 12:41:45.908654] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.349 [2024-12-13 12:41:45.919818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.349 [2024-12-13 12:41:45.920261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.349 [2024-12-13 12:41:45.920307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.349 [2024-12-13 12:41:45.920330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.349 [2024-12-13 12:41:45.920705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.349 [2024-12-13 12:41:45.920889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.349 [2024-12-13 12:41:45.920897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.349 [2024-12-13 12:41:45.920904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.349 [2024-12-13 12:41:45.920910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.349 [2024-12-13 12:41:45.932667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.349 [2024-12-13 12:41:45.933075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.349 [2024-12-13 12:41:45.933091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.349 [2024-12-13 12:41:45.933098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.349 [2024-12-13 12:41:45.933265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.349 [2024-12-13 12:41:45.933433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.349 [2024-12-13 12:41:45.933441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.349 [2024-12-13 12:41:45.933447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.349 [2024-12-13 12:41:45.933453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.349 [2024-12-13 12:41:45.945523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.349 [2024-12-13 12:41:45.945930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.349 [2024-12-13 12:41:45.945946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.349 [2024-12-13 12:41:45.945953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.349 [2024-12-13 12:41:45.946111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.349 [2024-12-13 12:41:45.946270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.349 [2024-12-13 12:41:45.946278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.349 [2024-12-13 12:41:45.946284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.349 [2024-12-13 12:41:45.946289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:45.958253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:45.958694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:45.958709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:45.958716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:45.958890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:45.959058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:45.959066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:45.959072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:45.959078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:45.971119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:45.971509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:45.971525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:45.971532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:45.971690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:45.971873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:45.971882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:45.971888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:45.971894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:45.983958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:45.984294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:45.984310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:45.984317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:45.984476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:45.984634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:45.984642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:45.984648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:45.984653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:45.996815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:45.997185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:45.997201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:45.997208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:45.997370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:45.997529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:45.997537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:45.997542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:45.997548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:46.009603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:46.010014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:46.010032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:46.010040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:46.010214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:46.010387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:46.010395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:46.010402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:46.010408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:46.022646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:46.023049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:46.023065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:46.023073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:46.023245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:46.023418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:46.023426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:46.023433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:46.023439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.350 [2024-12-13 12:41:46.035642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.350 [2024-12-13 12:41:46.035979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.350 [2024-12-13 12:41:46.035996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.350 [2024-12-13 12:41:46.036003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.350 [2024-12-13 12:41:46.036176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.350 [2024-12-13 12:41:46.036348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.350 [2024-12-13 12:41:46.036360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.350 [2024-12-13 12:41:46.036366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.350 [2024-12-13 12:41:46.036372] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 [2024-12-13 12:41:46.048542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.048997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.049014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.049021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.049189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.049356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.049364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.049370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.049376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 [2024-12-13 12:41:46.061307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.061745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.061798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.061824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.062408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.062850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.062868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.062882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.062894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 [2024-12-13 12:41:46.076250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.076771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.076798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.076809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.077063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.077319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.077330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.077339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.077352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 10227.33 IOPS, 39.95 MiB/s [2024-12-13T11:41:46.311Z]
00:36:18.611 [2024-12-13 12:41:46.089279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.089725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.089732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.089911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.090085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.090093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.090099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.090106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 [2024-12-13 12:41:46.102088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.102393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.102409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.102416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.102575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.102733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.102741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.102747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.102752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.611 [2024-12-13 12:41:46.114845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.611 [2024-12-13 12:41:46.115193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.611 [2024-12-13 12:41:46.115210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.611 [2024-12-13 12:41:46.115217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.611 [2024-12-13 12:41:46.115384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.611 [2024-12-13 12:41:46.115552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.611 [2024-12-13 12:41:46.115561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.611 [2024-12-13 12:41:46.115567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.611 [2024-12-13 12:41:46.115573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.127579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.127927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.127943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.127950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.128118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.128285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.128293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.128299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.128305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.140374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.140712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.140727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.140735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.140909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.141077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.141085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.141092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.141097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.153117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.153553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.153596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.153619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.154146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.154513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.154529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.154543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.154555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.167652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.168117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.168138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.168148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.168398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.168643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.168654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.168664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.168673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.180473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.180886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.180903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.180911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.181079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.181247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.181254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.181260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.181266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.193533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.193917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.193935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.193943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.194116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.194290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.194298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.194305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.194311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.206561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.206961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.206968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.207140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.207313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.207324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.207331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.207337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.219493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.219874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.219919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.219941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.220524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.220725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.220733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.220740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.220746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.232250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.232548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.232564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.232571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.232741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.232916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.232926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.232932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.232938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.245140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.612 [2024-12-13 12:41:46.245561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.612 [2024-12-13 12:41:46.245606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.612 [2024-12-13 12:41:46.245629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.612 [2024-12-13 12:41:46.246224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.612 [2024-12-13 12:41:46.246627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.612 [2024-12-13 12:41:46.246634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.612 [2024-12-13 12:41:46.246640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.612 [2024-12-13 12:41:46.246650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.612 [2024-12-13 12:41:46.257895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.613 [2024-12-13 12:41:46.258298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.613 [2024-12-13 12:41:46.258342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.613 [2024-12-13 12:41:46.258364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.613 [2024-12-13 12:41:46.258961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.613 [2024-12-13 12:41:46.259499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.613 [2024-12-13 12:41:46.259506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.613 [2024-12-13 12:41:46.259512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.613 [2024-12-13 12:41:46.259518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.613 [2024-12-13 12:41:46.270688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.613 [2024-12-13 12:41:46.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.613 [2024-12-13 12:41:46.270998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.613 [2024-12-13 12:41:46.271005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.613 [2024-12-13 12:41:46.271172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.613 [2024-12-13 12:41:46.271341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.613 [2024-12-13 12:41:46.271350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.613 [2024-12-13 12:41:46.271355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.613 [2024-12-13 12:41:46.271361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.613 [2024-12-13 12:41:46.283744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.613 [2024-12-13 12:41:46.284165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.613 [2024-12-13 12:41:46.284182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.613 [2024-12-13 12:41:46.284189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.613 [2024-12-13 12:41:46.284356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.613 [2024-12-13 12:41:46.284525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.613 [2024-12-13 12:41:46.284532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.613 [2024-12-13 12:41:46.284539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.613 [2024-12-13 12:41:46.284545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.613 [2024-12-13 12:41:46.296737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.613 [2024-12-13 12:41:46.297070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.613 [2024-12-13 12:41:46.297086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.613 [2024-12-13 12:41:46.297093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.613 [2024-12-13 12:41:46.297262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.613 [2024-12-13 12:41:46.297430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.613 [2024-12-13 12:41:46.297438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.613 [2024-12-13 12:41:46.297444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.613 [2024-12-13 12:41:46.297449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.309680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.309959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.309976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.309983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.874 [2024-12-13 12:41:46.310170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.874 [2024-12-13 12:41:46.310343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.874 [2024-12-13 12:41:46.310351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.874 [2024-12-13 12:41:46.310357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.874 [2024-12-13 12:41:46.310363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.322617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.322898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.322915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.322922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.874 [2024-12-13 12:41:46.323090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.874 [2024-12-13 12:41:46.323258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.874 [2024-12-13 12:41:46.323266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.874 [2024-12-13 12:41:46.323272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.874 [2024-12-13 12:41:46.323277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.335376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.335789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.335806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.335812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.874 [2024-12-13 12:41:46.335988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.874 [2024-12-13 12:41:46.336160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.874 [2024-12-13 12:41:46.336168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.874 [2024-12-13 12:41:46.336174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.874 [2024-12-13 12:41:46.336180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.348234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.348582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.348599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.348606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.874 [2024-12-13 12:41:46.348778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.874 [2024-12-13 12:41:46.348957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.874 [2024-12-13 12:41:46.348965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.874 [2024-12-13 12:41:46.348971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.874 [2024-12-13 12:41:46.348987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.361007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.361442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.361459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.361465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.874 [2024-12-13 12:41:46.361633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.874 [2024-12-13 12:41:46.361806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.874 [2024-12-13 12:41:46.361814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.874 [2024-12-13 12:41:46.361820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.874 [2024-12-13 12:41:46.361826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.874 [2024-12-13 12:41:46.373830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.874 [2024-12-13 12:41:46.374241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.874 [2024-12-13 12:41:46.374257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.874 [2024-12-13 12:41:46.374264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.875 [2024-12-13 12:41:46.374432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.875 [2024-12-13 12:41:46.374604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.875 [2024-12-13 12:41:46.374616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.875 [2024-12-13 12:41:46.374622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.875 [2024-12-13 12:41:46.374628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.875 [2024-12-13 12:41:46.386669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.875 [2024-12-13 12:41:46.387001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.875 [2024-12-13 12:41:46.387018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.875 [2024-12-13 12:41:46.387025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.875 [2024-12-13 12:41:46.387192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.875 [2024-12-13 12:41:46.387360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.875 [2024-12-13 12:41:46.387368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.875 [2024-12-13 12:41:46.387374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.875 [2024-12-13 12:41:46.387380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.875 [2024-12-13 12:41:46.399440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.875 [2024-12-13 12:41:46.399865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.875 [2024-12-13 12:41:46.399915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.875 [2024-12-13 12:41:46.399938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.875 [2024-12-13 12:41:46.400522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.875 [2024-12-13 12:41:46.400784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.875 [2024-12-13 12:41:46.400793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.875 [2024-12-13 12:41:46.400799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.875 [2024-12-13 12:41:46.400805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.875 [2024-12-13 12:41:46.412266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.875 [2024-12-13 12:41:46.412553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.875 [2024-12-13 12:41:46.412569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.875 [2024-12-13 12:41:46.412576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.875 [2024-12-13 12:41:46.412744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.875 [2024-12-13 12:41:46.412918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.875 [2024-12-13 12:41:46.412927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.875 [2024-12-13 12:41:46.412932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.875 [2024-12-13 12:41:46.412941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.875 [2024-12-13 12:41:46.425009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.875 [2024-12-13 12:41:46.425460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.875 [2024-12-13 12:41:46.425506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:18.875 [2024-12-13 12:41:46.425529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:18.875 [2024-12-13 12:41:46.426103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:18.875 [2024-12-13 12:41:46.426494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.875 [2024-12-13 12:41:46.426511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.875 [2024-12-13 12:41:46.426525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.875 [2024-12-13 12:41:46.426538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.875 [2024-12-13 12:41:46.440076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.440507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.440574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.875 [2024-12-13 12:41:46.441079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.875 [2024-12-13 12:41:46.441335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.875 [2024-12-13 12:41:46.441346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.875 [2024-12-13 12:41:46.441355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.875 [2024-12-13 12:41:46.441364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.875 [2024-12-13 12:41:46.453100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.453507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.453550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.453572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.875 [2024-12-13 12:41:46.454027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.875 [2024-12-13 12:41:46.454197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.875 [2024-12-13 12:41:46.454205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.875 [2024-12-13 12:41:46.454211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.875 [2024-12-13 12:41:46.454217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.875 [2024-12-13 12:41:46.465899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.466245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.466260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.466267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.875 [2024-12-13 12:41:46.466435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.875 [2024-12-13 12:41:46.466603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.875 [2024-12-13 12:41:46.466611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.875 [2024-12-13 12:41:46.466617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.875 [2024-12-13 12:41:46.466623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.875 [2024-12-13 12:41:46.478708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.479065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.479082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.479089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.875 [2024-12-13 12:41:46.479256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.875 [2024-12-13 12:41:46.479424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.875 [2024-12-13 12:41:46.479432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.875 [2024-12-13 12:41:46.479439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.875 [2024-12-13 12:41:46.479445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.875 [2024-12-13 12:41:46.491574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.491874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.491890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.491897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.875 [2024-12-13 12:41:46.492066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.875 [2024-12-13 12:41:46.492233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.875 [2024-12-13 12:41:46.492241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.875 [2024-12-13 12:41:46.492247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.875 [2024-12-13 12:41:46.492253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.875 [2024-12-13 12:41:46.504331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.875 [2024-12-13 12:41:46.504688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.875 [2024-12-13 12:41:46.504705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.875 [2024-12-13 12:41:46.504712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.504888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.505057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.505065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.505071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.505077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.876 [2024-12-13 12:41:46.517135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.876 [2024-12-13 12:41:46.517501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.876 [2024-12-13 12:41:46.517518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.876 [2024-12-13 12:41:46.517524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.517692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.517865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.517873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.517880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.517885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.876 [2024-12-13 12:41:46.529909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.876 [2024-12-13 12:41:46.530313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.876 [2024-12-13 12:41:46.530330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.876 [2024-12-13 12:41:46.530337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.530504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.530673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.530681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.530687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.530695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.876 [2024-12-13 12:41:46.542871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.876 [2024-12-13 12:41:46.543149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.876 [2024-12-13 12:41:46.543165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.876 [2024-12-13 12:41:46.543172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.543340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.543508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.543519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.543525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.543531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:18.876 [2024-12-13 12:41:46.555841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.876 [2024-12-13 12:41:46.556257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.876 [2024-12-13 12:41:46.556274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.876 [2024-12-13 12:41:46.556281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.556449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.556617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.556625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.556631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.556637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.876 [2024-12-13 12:41:46.568860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.876 [2024-12-13 12:41:46.569190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.876 [2024-12-13 12:41:46.569206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:18.876 [2024-12-13 12:41:46.569213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:18.876 [2024-12-13 12:41:46.569381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:18.876 [2024-12-13 12:41:46.569549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.876 [2024-12-13 12:41:46.569557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.876 [2024-12-13 12:41:46.569563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.876 [2024-12-13 12:41:46.569569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.137 [2024-12-13 12:41:46.581762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.582122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.582138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.582146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.582314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.582482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.582490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.582496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.582507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.137 [2024-12-13 12:41:46.594562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.594953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.595004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.595028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.595555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.595723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.595731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.595737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.595743] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.137 [2024-12-13 12:41:46.607538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.607882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.607898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.607906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.608073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.608241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.608249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.608255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.608261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.137 [2024-12-13 12:41:46.620342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.620732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.620748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.620754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.620941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.621109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.621117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.621123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.621129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.137 [2024-12-13 12:41:46.633175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.633594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.633610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.633617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.633776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.633964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.633973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.633979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.633985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.137 [2024-12-13 12:41:46.646049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.646486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.646530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.646553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.647154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.647605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.647613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.647619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.647625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.137 [2024-12-13 12:41:46.658873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.137 [2024-12-13 12:41:46.659290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.137 [2024-12-13 12:41:46.659334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.137 [2024-12-13 12:41:46.659357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.137 [2024-12-13 12:41:46.659956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.137 [2024-12-13 12:41:46.660542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.137 [2024-12-13 12:41:46.660563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.137 [2024-12-13 12:41:46.660569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.137 [2024-12-13 12:41:46.660575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.137 [2024-12-13 12:41:46.671729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.672070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.672086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.672093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.672255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.672414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.672422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.672427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.672433] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.138 [2024-12-13 12:41:46.684515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.684959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.684999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.685024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.685597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.685757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.685764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.685770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.685775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.138 [2024-12-13 12:41:46.697319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.697712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.697756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.697779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.698298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.698467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.698475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.698481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.698486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.138 [2024-12-13 12:41:46.710086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.710502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.710518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.710524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.710683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.710866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.710877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.710884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.710890] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.138 [2024-12-13 12:41:46.722932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.723346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.723361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.723368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.723528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.723686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.723694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.723700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.723705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.138 [2024-12-13 12:41:46.735774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.736210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.736253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.736275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.736696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.736880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.736888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.736894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.736900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.138 [2024-12-13 12:41:46.748561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.748991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.749007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.749014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.749187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.749359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.749367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.749374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.749382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.138 [2024-12-13 12:41:46.761446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.761856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.761872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.761878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.762038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.762196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.762204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.762210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.762215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.138 [2024-12-13 12:41:46.774311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.774734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.774751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.774757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.774945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.775114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.775122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.775128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.775134] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.138 [2024-12-13 12:41:46.787041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.787494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.787512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.138 [2024-12-13 12:41:46.787519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.138 [2024-12-13 12:41:46.787686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.138 [2024-12-13 12:41:46.787880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.138 [2024-12-13 12:41:46.787888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.138 [2024-12-13 12:41:46.787894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.138 [2024-12-13 12:41:46.787901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.138 [2024-12-13 12:41:46.799958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.138 [2024-12-13 12:41:46.800377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.138 [2024-12-13 12:41:46.800421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.139 [2024-12-13 12:41:46.800444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.139 [2024-12-13 12:41:46.800898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.139 [2024-12-13 12:41:46.801068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.139 [2024-12-13 12:41:46.801076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.139 [2024-12-13 12:41:46.801082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.139 [2024-12-13 12:41:46.801088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.139 [2024-12-13 12:41:46.812791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.139 [2024-12-13 12:41:46.813228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.139 [2024-12-13 12:41:46.813244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.139 [2024-12-13 12:41:46.813251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.139 [2024-12-13 12:41:46.813419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.139 [2024-12-13 12:41:46.813586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.139 [2024-12-13 12:41:46.813594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.139 [2024-12-13 12:41:46.813600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.139 [2024-12-13 12:41:46.813606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.139 [2024-12-13 12:41:46.825534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.139 [2024-12-13 12:41:46.825946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.139 [2024-12-13 12:41:46.825962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.139 [2024-12-13 12:41:46.825969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.139 [2024-12-13 12:41:46.826128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.139 [2024-12-13 12:41:46.826286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.139 [2024-12-13 12:41:46.826294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.139 [2024-12-13 12:41:46.826300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.139 [2024-12-13 12:41:46.826305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.400 [2024-12-13 12:41:46.838489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.400 [2024-12-13 12:41:46.838895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.400 [2024-12-13 12:41:46.838912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.400 [2024-12-13 12:41:46.838919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.400 [2024-12-13 12:41:46.839081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.400 [2024-12-13 12:41:46.839240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.400 [2024-12-13 12:41:46.839247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.400 [2024-12-13 12:41:46.839253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.400 [2024-12-13 12:41:46.839258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.400 [2024-12-13 12:41:46.851402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.400 [2024-12-13 12:41:46.851828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.400 [2024-12-13 12:41:46.851844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.400 [2024-12-13 12:41:46.851851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.400 [2024-12-13 12:41:46.852010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.400 [2024-12-13 12:41:46.852169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.400 [2024-12-13 12:41:46.852176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.400 [2024-12-13 12:41:46.852182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.400 [2024-12-13 12:41:46.852188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.400 [2024-12-13 12:41:46.864236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.400 [2024-12-13 12:41:46.864628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.400 [2024-12-13 12:41:46.864644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.400 [2024-12-13 12:41:46.864651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.400 [2024-12-13 12:41:46.864832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.400 [2024-12-13 12:41:46.865000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.400 [2024-12-13 12:41:46.865008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.400 [2024-12-13 12:41:46.865014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.400 [2024-12-13 12:41:46.865021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.400 [2024-12-13 12:41:46.877034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.400 [2024-12-13 12:41:46.877338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.400 [2024-12-13 12:41:46.877354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.400 [2024-12-13 12:41:46.877361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.400 [2024-12-13 12:41:46.877520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.400 [2024-12-13 12:41:46.877678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.400 [2024-12-13 12:41:46.877689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.400 [2024-12-13 12:41:46.877695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.400 [2024-12-13 12:41:46.877700] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.400 [2024-12-13 12:41:46.889862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.400 [2024-12-13 12:41:46.890276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.400 [2024-12-13 12:41:46.890292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.400 [2024-12-13 12:41:46.890298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.400 [2024-12-13 12:41:46.890457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.890616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.890623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.890629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.890634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.401 [2024-12-13 12:41:46.902707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.903139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.903155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.903162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.903330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.903498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.903506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.903512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.903518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.401 [2024-12-13 12:41:46.915566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.915958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.915973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.915980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.916140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.916299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.916306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.916312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.916321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.401 [2024-12-13 12:41:46.928368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.928776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.928825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.928849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.929433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.930036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.930062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.930083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.930102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.401 [2024-12-13 12:41:46.941116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.941526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.941542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.941548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.941707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.941890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.941899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.941905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.941911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.401 [2024-12-13 12:41:46.953889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.954234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.954249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.954256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.954415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.954573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.954581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.954586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.954592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.401 [2024-12-13 12:41:46.966664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.967093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.967109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.967116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.967284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.967452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.967459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.967465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.967471] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.401 [2024-12-13 12:41:46.979547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.979956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.979972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.979979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.980138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.980297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.980304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.980310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.980316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.401 [2024-12-13 12:41:46.992296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:46.992708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:46.992723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:46.992730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:46.992916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:46.993085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:46.993093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:46.993098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:46.993104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.401 [2024-12-13 12:41:47.005205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:47.005564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:47.005580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.401 [2024-12-13 12:41:47.005587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.401 [2024-12-13 12:41:47.005759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.401 [2024-12-13 12:41:47.005933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.401 [2024-12-13 12:41:47.005942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.401 [2024-12-13 12:41:47.005948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.401 [2024-12-13 12:41:47.005954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.401 [2024-12-13 12:41:47.018040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.401 [2024-12-13 12:41:47.018450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.401 [2024-12-13 12:41:47.018466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.018472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.018630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.018794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.018802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.018808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.018814] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.402 [2024-12-13 12:41:47.030806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.031195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.031211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.031217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.031377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.031535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.031542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.031548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.031554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.402 [2024-12-13 12:41:47.043635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.044078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.044095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.044103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.044271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.044438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.044449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.044455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.044461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.402 [2024-12-13 12:41:47.056649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.057023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.057040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.057047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.057215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.057383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.057391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.057397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.057402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.402 [2024-12-13 12:41:47.069598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.070030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.070047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.070054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.070226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.070398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.070407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.070413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.070419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.402 [2024-12-13 12:41:47.082410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.082810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.082855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.082878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.083380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.083542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.083550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.083556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.083565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.402 7670.50 IOPS, 29.96 MiB/s [2024-12-13T11:41:47.102Z] [2024-12-13 12:41:47.095251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.402 [2024-12-13 12:41:47.095709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.402 [2024-12-13 12:41:47.095754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.402 [2024-12-13 12:41:47.095777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.402 [2024-12-13 12:41:47.096354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.402 [2024-12-13 12:41:47.096523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.402 [2024-12-13 12:41:47.096531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.402 [2024-12-13 12:41:47.096537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.402 [2024-12-13 12:41:47.096542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
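The interleaved "7670.50 IOPS, 29.96 MiB/s [2024-12-13T11:41:47.102Z]" record is a per-interval throughput sample from the I/O generator running alongside these resets (most likely bdevperf in this test suite, judging by the bracketed ISO-8601 timestamp of its progress output). The two figures are mutually consistent if one assumes a 4 KiB I/O size: 7670.50 IOPS x 4096 B = 31,418,368 B/s, and 31,418,368 / 1,048,576 = 29.96 MiB/s.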
00:36:19.663 [2024-12-13 12:41:47.108121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.663 [2024-12-13 12:41:47.108532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.663 [2024-12-13 12:41:47.108548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.663 [2024-12-13 12:41:47.108555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.663 [2024-12-13 12:41:47.108723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.663 [2024-12-13 12:41:47.108900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.663 [2024-12-13 12:41:47.108908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.108914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.108920] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.664 [2024-12-13 12:41:47.120997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.121414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.121431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.121438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.121606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.121774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.121788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.121795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.121801] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.664 [2024-12-13 12:41:47.133818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.134266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.134282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.134289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.134456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.134623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.134632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.134638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.134644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.664 [2024-12-13 12:41:47.146677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.147108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.147115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.147283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.147451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.147460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.147465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.147471] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
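The "(9): Bad file descriptor" in the flush errors above is errno 9, EBADF: by the time nvme_tcp_qpair_process_completions() tries to flush the qpair, the socket of the failed connection has already been torn down, so any further socket call on that fd fails this way. A minimal sketch of the same errno, using a deliberately closed socket as a stand-in for the torn-down qpair fd:

/* Minimal sketch: errno 9 (EBADF), the "(9): Bad file descriptor" seen in
 * the flush errors -- any socket call on an fd that was already closed
 * during teardown fails like this. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                       /* teardown: fd is no longer valid */

    if (send(fd, "x", 1, 0) < 0) {
        /* Prints: send: errno = 9 (Bad file descriptor) */
        printf("send: errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}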
00:36:19.664 [2024-12-13 12:41:47.159489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.159882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.159927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.159950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.160533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.160759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.160767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.160773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.160779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.664 [2024-12-13 12:41:47.172245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.172676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.172721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.172744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.173351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.173917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.173935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.173949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.173962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.664 [2024-12-13 12:41:47.187312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.187820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.187866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.187890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.188473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.188827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.188839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.188848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.188857] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.664 [2024-12-13 12:41:47.200206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.200610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.200627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.200634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.200808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.200977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.200985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.200991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.664 [2024-12-13 12:41:47.200997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.664 [2024-12-13 12:41:47.213196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.664 [2024-12-13 12:41:47.213610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.664 [2024-12-13 12:41:47.213626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.664 [2024-12-13 12:41:47.213634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.664 [2024-12-13 12:41:47.213808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.664 [2024-12-13 12:41:47.213977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.664 [2024-12-13 12:41:47.213988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.664 [2024-12-13 12:41:47.213994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.214000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.665 [2024-12-13 12:41:47.225969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.226400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.226416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.226423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.226591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.226760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.226768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.226774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.226789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.665 [2024-12-13 12:41:47.238724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.239143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.239160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.239167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.239326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.239486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.239494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.239500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.239505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.665 [2024-12-13 12:41:47.251643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.252079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.252096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.252103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.252270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.252438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.252446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.252452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.252461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.665 [2024-12-13 12:41:47.264392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.264805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.264821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.264828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.264995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.265163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.265171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.265177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.265183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.665 [2024-12-13 12:41:47.277132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.277577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.277622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.277645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.278147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.278322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.278331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.278337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.278343] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.665 [2024-12-13 12:41:47.289988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.290392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.290408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.290415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.290573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.290733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.290740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.290746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.290752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.665 [2024-12-13 12:41:47.302837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.303198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.303214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.303221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.303389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.303557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.303565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.303572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.303578] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.665 [2024-12-13 12:41:47.315779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.316195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.316239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.316262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.316855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.317060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.317068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.665 [2024-12-13 12:41:47.317074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.665 [2024-12-13 12:41:47.317080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.665 [2024-12-13 12:41:47.328778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.665 [2024-12-13 12:41:47.329176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.665 [2024-12-13 12:41:47.329192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.665 [2024-12-13 12:41:47.329199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.665 [2024-12-13 12:41:47.329366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.665 [2024-12-13 12:41:47.329534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.665 [2024-12-13 12:41:47.329542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.666 [2024-12-13 12:41:47.329548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.666 [2024-12-13 12:41:47.329554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.666 [2024-12-13 12:41:47.341624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.666 [2024-12-13 12:41:47.342034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.666 [2024-12-13 12:41:47.342051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.666 [2024-12-13 12:41:47.342058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.666 [2024-12-13 12:41:47.342229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.666 [2024-12-13 12:41:47.342397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.666 [2024-12-13 12:41:47.342405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.666 [2024-12-13 12:41:47.342410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.666 [2024-12-13 12:41:47.342416] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.666 [2024-12-13 12:41:47.354449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.666 [2024-12-13 12:41:47.354859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.666 [2024-12-13 12:41:47.354876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.666 [2024-12-13 12:41:47.354884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.666 [2024-12-13 12:41:47.355052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.666 [2024-12-13 12:41:47.355220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.666 [2024-12-13 12:41:47.355228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.666 [2024-12-13 12:41:47.355234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.666 [2024-12-13 12:41:47.355240] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.927 [2024-12-13 12:41:47.367335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.927 [2024-12-13 12:41:47.367740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.927 [2024-12-13 12:41:47.367799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.927 [2024-12-13 12:41:47.367824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.927 [2024-12-13 12:41:47.368222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.927 [2024-12-13 12:41:47.368391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.927 [2024-12-13 12:41:47.368399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.927 [2024-12-13 12:41:47.368405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.927 [2024-12-13 12:41:47.368411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.927 [2024-12-13 12:41:47.380124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.927 [2024-12-13 12:41:47.380534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.927 [2024-12-13 12:41:47.380550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.927 [2024-12-13 12:41:47.380557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.927 [2024-12-13 12:41:47.380725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.927 [2024-12-13 12:41:47.380900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.927 [2024-12-13 12:41:47.380912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.927 [2024-12-13 12:41:47.380918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.927 [2024-12-13 12:41:47.380923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.927 [2024-12-13 12:41:47.392985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.927 [2024-12-13 12:41:47.393379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.927 [2024-12-13 12:41:47.393422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.927 [2024-12-13 12:41:47.393444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.927 [2024-12-13 12:41:47.394043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.927 [2024-12-13 12:41:47.394223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.927 [2024-12-13 12:41:47.394231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.927 [2024-12-13 12:41:47.394236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.927 [2024-12-13 12:41:47.394242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.927 [2024-12-13 12:41:47.405837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.927 [2024-12-13 12:41:47.406232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.927 [2024-12-13 12:41:47.406248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.927 [2024-12-13 12:41:47.406255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.927 [2024-12-13 12:41:47.406422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.927 [2024-12-13 12:41:47.406590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.927 [2024-12-13 12:41:47.406598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.927 [2024-12-13 12:41:47.406604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.927 [2024-12-13 12:41:47.406610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.927 [2024-12-13 12:41:47.418682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.927 [2024-12-13 12:41:47.419095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.927 [2024-12-13 12:41:47.419112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.927 [2024-12-13 12:41:47.419119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.927 [2024-12-13 12:41:47.419287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.927 [2024-12-13 12:41:47.419454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.419462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.419469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.419478] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.928 [2024-12-13 12:41:47.431541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.431960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.431976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.431983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.432150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.432317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.432325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.432331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.432337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.928 [2024-12-13 12:41:47.444405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.444794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.444810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.444817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.444985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.445175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.445183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.445189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.445195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.928 [2024-12-13 12:41:47.457259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.457647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.457663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.457670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.457845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.458014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.458021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.458027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.458033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.928 [2024-12-13 12:41:47.470092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.470494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.470509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.470516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.470674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.470841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.470849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.470854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.470860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.928 [2024-12-13 12:41:47.482841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.483225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.483241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.483247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.483406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.483565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.483573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.483578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.483584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.928 [2024-12-13 12:41:47.495647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.496061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.496077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.496084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.496252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.496419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.496427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.496433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.496439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.928 [2024-12-13 12:41:47.508490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.928 [2024-12-13 12:41:47.508921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.928 [2024-12-13 12:41:47.508939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:19.928 [2024-12-13 12:41:47.508946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:19.928 [2024-12-13 12:41:47.509118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:19.928 [2024-12-13 12:41:47.509285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.928 [2024-12-13 12:41:47.509293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.928 [2024-12-13 12:41:47.509300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.928 [2024-12-13 12:41:47.509306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.928 [2024-12-13 12:41:47.521217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.928 [2024-12-13 12:41:47.521516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.928 [2024-12-13 12:41:47.521532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.928 [2024-12-13 12:41:47.521539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.928 [2024-12-13 12:41:47.521697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.928 [2024-12-13 12:41:47.521862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.928 [2024-12-13 12:41:47.521870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.928 [2024-12-13 12:41:47.521876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.928 [2024-12-13 12:41:47.521881] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.928 [2024-12-13 12:41:47.533985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.928 [2024-12-13 12:41:47.534390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.928 [2024-12-13 12:41:47.534433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.928 [2024-12-13 12:41:47.534456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.928 [2024-12-13 12:41:47.535060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.928 [2024-12-13 12:41:47.535599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.928 [2024-12-13 12:41:47.535606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.928 [2024-12-13 12:41:47.535612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.928 [2024-12-13 12:41:47.535617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.928 [2024-12-13 12:41:47.546783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.928 [2024-12-13 12:41:47.547175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.928 [2024-12-13 12:41:47.547191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.928 [2024-12-13 12:41:47.547198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.928 [2024-12-13 12:41:47.547365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.928 [2024-12-13 12:41:47.547533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.928 [2024-12-13 12:41:47.547544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.547550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.547556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.559704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.560112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.560129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.560136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.560309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.560481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.560490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.560496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.560502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.572651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.573111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.573130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.573137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.573312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.573484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.573492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.573498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.573505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.585583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.586005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.586023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.586030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.586198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.586366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.586374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.586380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.586393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.598369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.598799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.598816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.598823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.598999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.599158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.599166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.599172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.599177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.611135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.611587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.611603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.611610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.611777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.611950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.611959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.611965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.611971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.929 [2024-12-13 12:41:47.624213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.929 [2024-12-13 12:41:47.624629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.929 [2024-12-13 12:41:47.624646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:19.929 [2024-12-13 12:41:47.624654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:19.929 [2024-12-13 12:41:47.624831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:19.929 [2024-12-13 12:41:47.625004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.929 [2024-12-13 12:41:47.625013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.929 [2024-12-13 12:41:47.625019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.929 [2024-12-13 12:41:47.625025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.190 [2024-12-13 12:41:47.637067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.190 [2024-12-13 12:41:47.637436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.190 [2024-12-13 12:41:47.637452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.190 [2024-12-13 12:41:47.637459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.190 [2024-12-13 12:41:47.637627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.190 [2024-12-13 12:41:47.637799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.190 [2024-12-13 12:41:47.637807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.190 [2024-12-13 12:41:47.637814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.190 [2024-12-13 12:41:47.637820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.190 [2024-12-13 12:41:47.649938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.190 [2024-12-13 12:41:47.650307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.190 [2024-12-13 12:41:47.650323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.190 [2024-12-13 12:41:47.650330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.190 [2024-12-13 12:41:47.650503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.190 [2024-12-13 12:41:47.650675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.190 [2024-12-13 12:41:47.650684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.190 [2024-12-13 12:41:47.650690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.190 [2024-12-13 12:41:47.650696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.190 [2024-12-13 12:41:47.662669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.190 [2024-12-13 12:41:47.662962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.190 [2024-12-13 12:41:47.662978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.190 [2024-12-13 12:41:47.662985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.190 [2024-12-13 12:41:47.663153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.190 [2024-12-13 12:41:47.663320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.190 [2024-12-13 12:41:47.663328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.190 [2024-12-13 12:41:47.663335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.190 [2024-12-13 12:41:47.663341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.190 [2024-12-13 12:41:47.675423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.190 [2024-12-13 12:41:47.675837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.190 [2024-12-13 12:41:47.675855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.190 [2024-12-13 12:41:47.675862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.190 [2024-12-13 12:41:47.676039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.190 [2024-12-13 12:41:47.676203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.190 [2024-12-13 12:41:47.676211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.190 [2024-12-13 12:41:47.676216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.190 [2024-12-13 12:41:47.676222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.190 [2024-12-13 12:41:47.688253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.190 [2024-12-13 12:41:47.688587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.190 [2024-12-13 12:41:47.688630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.190 [2024-12-13 12:41:47.688652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.190 [2024-12-13 12:41:47.689129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.190 [2024-12-13 12:41:47.689299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.190 [2024-12-13 12:41:47.689307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.190 [2024-12-13 12:41:47.689313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.689319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.701089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.701376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.701392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.701399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.701567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.701735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.701743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.701749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.701755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.713830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.714132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.714148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.714155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.714322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.714490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.714501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.714507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.714513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.726684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.727031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.727047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.727054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.727223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.727390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.727398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.727404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.727410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.739486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.739934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.739979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.740001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.740585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.741171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.741180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.741185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.741191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.752396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.752801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.752817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.752824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.752991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.753159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.753167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.753173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.753182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.765374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.765677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.765721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.765744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.766269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.766438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.766446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.766451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.766458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.778248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.778710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.778727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.778734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.778907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.779077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.779085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.779091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.779097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.791171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.791605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.791623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.791630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.791802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.791972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.791980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.791986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.791992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.803915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.804212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.804227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.804234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.804401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.804569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.804576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.804582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.804588] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.191 [2024-12-13 12:41:47.816785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.191 [2024-12-13 12:41:47.817156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.191 [2024-12-13 12:41:47.817173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.191 [2024-12-13 12:41:47.817180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.191 [2024-12-13 12:41:47.817348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.191 [2024-12-13 12:41:47.817516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.191 [2024-12-13 12:41:47.817525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.191 [2024-12-13 12:41:47.817531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.191 [2024-12-13 12:41:47.817538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.192 [2024-12-13 12:41:47.829695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.192 [2024-12-13 12:41:47.830042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.192 [2024-12-13 12:41:47.830085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.192 [2024-12-13 12:41:47.830108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.192 [2024-12-13 12:41:47.830692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.192 [2024-12-13 12:41:47.831288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.192 [2024-12-13 12:41:47.831297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.192 [2024-12-13 12:41:47.831303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.192 [2024-12-13 12:41:47.831309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.192 [2024-12-13 12:41:47.842743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.192 [2024-12-13 12:41:47.843100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.192 [2024-12-13 12:41:47.843133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.192 [2024-12-13 12:41:47.843157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.192 [2024-12-13 12:41:47.843747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.192 [2024-12-13 12:41:47.844343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.192 [2024-12-13 12:41:47.844369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.192 [2024-12-13 12:41:47.844400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.192 [2024-12-13 12:41:47.844406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.192 [2024-12-13 12:41:47.855580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.192 [2024-12-13 12:41:47.855934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.192 [2024-12-13 12:41:47.855950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.192 [2024-12-13 12:41:47.855957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.192 [2024-12-13 12:41:47.856124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.192 [2024-12-13 12:41:47.856292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.192 [2024-12-13 12:41:47.856300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.192 [2024-12-13 12:41:47.856306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.192 [2024-12-13 12:41:47.856312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.192 [2024-12-13 12:41:47.868376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.192 [2024-12-13 12:41:47.868788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.192 [2024-12-13 12:41:47.868805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.192 [2024-12-13 12:41:47.868812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.192 [2024-12-13 12:41:47.868980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.192 [2024-12-13 12:41:47.869148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.192 [2024-12-13 12:41:47.869156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.192 [2024-12-13 12:41:47.869162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.192 [2024-12-13 12:41:47.869168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.192 [2024-12-13 12:41:47.881193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.192 [2024-12-13 12:41:47.881638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.192 [2024-12-13 12:41:47.881677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.192 [2024-12-13 12:41:47.881701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.192 [2024-12-13 12:41:47.882234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.192 [2024-12-13 12:41:47.882403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.192 [2024-12-13 12:41:47.882413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.192 [2024-12-13 12:41:47.882419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.192 [2024-12-13 12:41:47.882425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.894099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.894534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.894579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.894602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.895197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.895419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.895427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.895432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.895438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.906913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.907285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.907330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.907352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.907947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.908517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.908526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.908533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.908540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.919722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.920098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.920115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.920122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.920290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.920458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.920466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.920472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.920482] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.932498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.932923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.932940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.932947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.933114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.933282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.933290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.933296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.933302] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.945374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.945806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.945823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.945830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.946005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.946164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.946172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.946178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.946183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.958190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.958583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.958599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.958606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.958774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.453 [2024-12-13 12:41:47.958947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.453 [2024-12-13 12:41:47.958955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.453 [2024-12-13 12:41:47.958961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.453 [2024-12-13 12:41:47.958967] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.453 [2024-12-13 12:41:47.971044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.453 [2024-12-13 12:41:47.971349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.453 [2024-12-13 12:41:47.971365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.453 [2024-12-13 12:41:47.971371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.453 [2024-12-13 12:41:47.971539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:47.971707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:47.971714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:47.971720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:47.971726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:47.983816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:47.984176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:47.984193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:47.984200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:47.984367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:47.984535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:47.984543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:47.984549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:47.984555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:47.996608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:47.996974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:47.996990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:47.996998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:47.997165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:47.997333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:47.997341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:47.997347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:47.997353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.009424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.009843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.009860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.009867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.010045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.010204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.010211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.010217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.010222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.022269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.022699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.022715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.022722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.022894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.023062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.023070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.023076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.023082] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.035149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.035508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.035525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.035532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.035700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.035872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.035881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.035887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.035892] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.048056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.048398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.048444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.048467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.048960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.049130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.049141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.049147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.049153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.060956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.061376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.061392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.061399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.061567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.061735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.061743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.061749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.061755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.073895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.074308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.074325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.074332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.074500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.074667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.074677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.074682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.074688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 [2024-12-13 12:41:48.086898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.087239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.087255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.087262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.087429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.454 [2024-12-13 12:41:48.087598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.454 [2024-12-13 12:41:48.087606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.454 [2024-12-13 12:41:48.087612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.454 [2024-12-13 12:41:48.087621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.454 6136.40 IOPS, 23.97 MiB/s [2024-12-13T11:41:48.154Z] [2024-12-13 12:41:48.099963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.454 [2024-12-13 12:41:48.100267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.454 [2024-12-13 12:41:48.100284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.454 [2024-12-13 12:41:48.100291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.454 [2024-12-13 12:41:48.100464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.455 [2024-12-13 12:41:48.100637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.455 [2024-12-13 12:41:48.100645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.455 [2024-12-13 12:41:48.100651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.455 [2024-12-13 12:41:48.100657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.455 [2024-12-13 12:41:48.112812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.455 [2024-12-13 12:41:48.113228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.455 [2024-12-13 12:41:48.113244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.455 [2024-12-13 12:41:48.113251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.455 [2024-12-13 12:41:48.113418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.455 [2024-12-13 12:41:48.113586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.455 [2024-12-13 12:41:48.113594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.455 [2024-12-13 12:41:48.113600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.455 [2024-12-13 12:41:48.113606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.455 [2024-12-13 12:41:48.125648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.455 [2024-12-13 12:41:48.126105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.455 [2024-12-13 12:41:48.126150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.455 [2024-12-13 12:41:48.126173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.455 [2024-12-13 12:41:48.126756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.455 [2024-12-13 12:41:48.127223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.455 [2024-12-13 12:41:48.127231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.455 [2024-12-13 12:41:48.127237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.455 [2024-12-13 12:41:48.127243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.455 [2024-12-13 12:41:48.138389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.455 [2024-12-13 12:41:48.138804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.455 [2024-12-13 12:41:48.138820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.455 [2024-12-13 12:41:48.138827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.455 [2024-12-13 12:41:48.138986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.455 [2024-12-13 12:41:48.139145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.455 [2024-12-13 12:41:48.139153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.455 [2024-12-13 12:41:48.139158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.455 [2024-12-13 12:41:48.139164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.151411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.151794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.151813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.151821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.151996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.152170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.152178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.152186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.152193] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.164197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.164612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.164629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.164637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.164811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.164980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.164988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.164994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.165000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.177056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.177415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.177431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.177438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.177609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.177777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.177792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.177798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.177804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.189866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.190282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.190298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.190305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.190472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.190640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.190648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.190654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.190660] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.202725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.203163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.203179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.203186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.203353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.203521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.203529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.203536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.203541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.215550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.215958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.215975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.215981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.216140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.216298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.216312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.216317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.216323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.228382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.228812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.228857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.228879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.229306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.229465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.229473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.229479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.229484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.241241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.241655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.241670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.241676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.241859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.242027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.242035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.242040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.242046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.254095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.716 [2024-12-13 12:41:48.254426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.716 [2024-12-13 12:41:48.254442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.716 [2024-12-13 12:41:48.254449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.716 [2024-12-13 12:41:48.254607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.716 [2024-12-13 12:41:48.254765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.716 [2024-12-13 12:41:48.254773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.716 [2024-12-13 12:41:48.254779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.716 [2024-12-13 12:41:48.254794] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.716 [2024-12-13 12:41:48.266872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.267311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.267351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.267376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.267973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.268464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.268472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.268478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.268483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.279642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.280072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.280089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.280096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.280264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.280432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.280439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.280445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.280451] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.292572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.293014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.293059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.293082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.293535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.293704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.293712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.293718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.293724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.305345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.305693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.305708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.305716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.305889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.306058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.306066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.306073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.306080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.318207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.318633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.318678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.318701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.319228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.319398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.319406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.319412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.319418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.331020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.331436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.331453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.331460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.331627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.331800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.331809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.331815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.331822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.344003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.344395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.344411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.344419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.344590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.344758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.344765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.344771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.344777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.356960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.357306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.357323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.357330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.357498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.357666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.357674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.357680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.357686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.369726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.370126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.370143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.370150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.370318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.370486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.370494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.370499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.370505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.382664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.717 [2024-12-13 12:41:48.383107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.717 [2024-12-13 12:41:48.383152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.717 [2024-12-13 12:41:48.383174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.717 [2024-12-13 12:41:48.383661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.717 [2024-12-13 12:41:48.383835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.717 [2024-12-13 12:41:48.383846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.717 [2024-12-13 12:41:48.383852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.717 [2024-12-13 12:41:48.383859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.717 [2024-12-13 12:41:48.395463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.718 [2024-12-13 12:41:48.395874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.718 [2024-12-13 12:41:48.395891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.718 [2024-12-13 12:41:48.395897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.718 [2024-12-13 12:41:48.396056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.718 [2024-12-13 12:41:48.396214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.718 [2024-12-13 12:41:48.396222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.718 [2024-12-13 12:41:48.396227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.718 [2024-12-13 12:41:48.396233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.718 [2024-12-13 12:41:48.408431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.718 [2024-12-13 12:41:48.408808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.718 [2024-12-13 12:41:48.408855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.718 [2024-12-13 12:41:48.408878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.718 [2024-12-13 12:41:48.409462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.718 [2024-12-13 12:41:48.409776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.718 [2024-12-13 12:41:48.409790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.718 [2024-12-13 12:41:48.409796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.718 [2024-12-13 12:41:48.409802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.978 [2024-12-13 12:41:48.421248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.978 [2024-12-13 12:41:48.421637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.978 [2024-12-13 12:41:48.421653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.978 [2024-12-13 12:41:48.421659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.978 [2024-12-13 12:41:48.421841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.978 [2024-12-13 12:41:48.422010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.978 [2024-12-13 12:41:48.422018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.978 [2024-12-13 12:41:48.422025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.978 [2024-12-13 12:41:48.422034] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.978 [2024-12-13 12:41:48.434081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.978 [2024-12-13 12:41:48.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.978 [2024-12-13 12:41:48.434545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.978 [2024-12-13 12:41:48.434568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.978 [2024-12-13 12:41:48.435167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.435759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.435802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.435825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.435845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.446871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.447296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.447340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.447363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.447834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.448003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.448011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.448017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.448023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.459737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.460154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.460170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.460177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.460336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.460494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.460502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.460507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.460513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.472476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.472892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.472907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.472914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.473072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.473230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.473238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.473244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.473249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.485295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.485705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.485720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.485727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.485915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.486084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.486092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.486098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.486104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.498114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.498508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.498525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.498531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.498691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.498874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.498883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.498889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.498894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.510945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.511387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.511403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.511410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.511581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.511749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.511757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.511763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.511768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.523672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.524090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.524135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.524158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.524741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.525200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.525208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.525214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.525220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.536541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.536970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.537016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.537038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.537625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.538024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.538042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.538056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.538069] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.979 [2024-12-13 12:41:48.551436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.979 [2024-12-13 12:41:48.551954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.979 [2024-12-13 12:41:48.551975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.979 [2024-12-13 12:41:48.551986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.979 [2024-12-13 12:41:48.552240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.979 [2024-12-13 12:41:48.552494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.979 [2024-12-13 12:41:48.552510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.979 [2024-12-13 12:41:48.552519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.979 [2024-12-13 12:41:48.552528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.980 [2024-12-13 12:41:48.564478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.980 [2024-12-13 12:41:48.564837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.980 [2024-12-13 12:41:48.564853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.980 [2024-12-13 12:41:48.564861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.980 [2024-12-13 12:41:48.565034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.980 [2024-12-13 12:41:48.565206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.980 [2024-12-13 12:41:48.565214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.980 [2024-12-13 12:41:48.565220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.980 [2024-12-13 12:41:48.565226] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.980 [2024-12-13 12:41:48.577336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.980 [2024-12-13 12:41:48.577747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.980 [2024-12-13 12:41:48.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.980 [2024-12-13 12:41:48.577770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.980 [2024-12-13 12:41:48.577957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.980 [2024-12-13 12:41:48.578125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.980 [2024-12-13 12:41:48.578133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.980 [2024-12-13 12:41:48.578139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.980 [2024-12-13 12:41:48.578145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.980 [2024-12-13 12:41:48.590078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:20.980 [2024-12-13 12:41:48.590536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:20.980 [2024-12-13 12:41:48.590582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420
00:36:20.980 [2024-12-13 12:41:48.590607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set
00:36:20.980 [2024-12-13 12:41:48.591155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor
00:36:20.980 [2024-12-13 12:41:48.591340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:20.980 [2024-12-13 12:41:48.591348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:20.980 [2024-12-13 12:41:48.591355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:20.980 [2024-12-13 12:41:48.591364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:20.980 [2024-12-13 12:41:48.603088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.603469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.603485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.603492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.603660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.603832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.603841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.603847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.603853] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.980 [2024-12-13 12:41:48.616017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.616425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.616441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.616448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.616616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.616791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.616800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.616806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.616812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.980 [2024-12-13 12:41:48.628784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.629191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.629207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.629213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.629373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.629531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.629538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.629544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.629550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.980 [2024-12-13 12:41:48.641618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.642070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.642113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.642136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.642719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.643277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.643286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.643291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.643297] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.980 [2024-12-13 12:41:48.654465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.654855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.654872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.654878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.655037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.655196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.655204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.655209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.655215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.980 [2024-12-13 12:41:48.667259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.980 [2024-12-13 12:41:48.667667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.980 [2024-12-13 12:41:48.667683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:20.980 [2024-12-13 12:41:48.667690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:20.980 [2024-12-13 12:41:48.667872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:20.980 [2024-12-13 12:41:48.668040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.980 [2024-12-13 12:41:48.668048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.980 [2024-12-13 12:41:48.668054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.980 [2024-12-13 12:41:48.668060] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:21.241 [2024-12-13 12:41:48.680095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:21.241 [2024-12-13 12:41:48.680462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:21.241 [2024-12-13 12:41:48.680478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:21.241 [2024-12-13 12:41:48.680484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:21.241 [2024-12-13 12:41:48.680646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:21.241 [2024-12-13 12:41:48.680820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:21.241 [2024-12-13 12:41:48.680828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:21.242 [2024-12-13 12:41:48.680834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:21.242 [2024-12-13 12:41:48.680840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:21.242 [2024-12-13 12:41:48.693054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:21.242 [2024-12-13 12:41:48.693493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:21.242 [2024-12-13 12:41:48.693530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:21.242 [2024-12-13 12:41:48.693555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:21.242 [2024-12-13 12:41:48.694146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:21.242 [2024-12-13 12:41:48.694316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:21.242 [2024-12-13 12:41:48.694324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:21.242 [2024-12-13 12:41:48.694330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:21.242 [2024-12-13 12:41:48.694336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:21.242 [2024-12-13 12:41:48.705787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:21.242 [2024-12-13 12:41:48.706222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:21.242 [2024-12-13 12:41:48.706238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:21.242 [2024-12-13 12:41:48.706245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:21.242 [2024-12-13 12:41:48.706413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:21.242 [2024-12-13 12:41:48.706580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:21.242 [2024-12-13 12:41:48.706588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:21.242 [2024-12-13 12:41:48.706594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:21.242 [2024-12-13 12:41:48.706600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:21.242 [2024-12-13 12:41:48.718654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:21.242 [2024-12-13 12:41:48.719105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:21.242 [2024-12-13 12:41:48.719149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fbcf0 with addr=10.0.0.2, port=4420 00:36:21.242 [2024-12-13 12:41:48.719172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fbcf0 is same with the state(6) to be set 00:36:21.242 [2024-12-13 12:41:48.719585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fbcf0 (9): Bad file descriptor 00:36:21.242 [2024-12-13 12:41:48.719753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:21.242 [2024-12-13 12:41:48.719764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:21.242 [2024-12-13 12:41:48.719770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:21.242 [2024-12-13 12:41:48.719776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
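The errno = 111 repeated above is ECONNREFUSED on Linux: between the moment the first nvmf_tgt is killed (next lines) and the moment its replacement starts listening, nothing accepts TCP connections at 10.0.0.2:4420, so each reconnect attempt by the bdevperf initiator fails immediately and bdev_nvme schedules the next reset. A quick way to confirm the listener state from a shell while this is happening -- a sketch only, using bash's built-in /dev/tcp redirection rather than anything from the SPDK test suite:

    # Probe 10.0.0.2:4420 the same way connect(2) does; run it from a host or
    # netns that can reach the target address. The fd opened in the subshell is
    # closed automatically when the subshell exits, so no cleanup is needed.
    ADDR=10.0.0.2 PORT=4420
    if (exec 3<>"/dev/tcp/${ADDR}/${PORT}") 2>/dev/null; then
        echo "listener up at ${ADDR}:${PORT}"
    else
        echo "connect refused at ${ADDR}:${PORT} -- matches errno = 111 above"
    fi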
[... 2024-12-13 12:41:48.733718 - 12:41:48.735066: one further reset attempt fails the same way (connect() errno = 111, tqpair=0x23fbcf0, 10.0.0.2 port 4420) ...]
00:36:21.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 535263 Killed "${NVMF_APP[@]}" "$@"
00:36:21.242 12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-12-13 12:41:48.746733 - 12:41:48.747551: one further reset attempt fails the same way ...]
00:36:21.242 12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=536419
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 536419
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 536419 ']'
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:21.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-12-13 12:41:48.759784 - 12:41:48.760605: one further reset attempt fails the same way (connect() errno = 111, tqpair=0x23fbcf0) ...]
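waitforlisten above blocks until the restarted target answers RPCs on /var/tmp/spdk.sock (up to max_retries=100, per the xtrace). A minimal approximation of that wait, assuming SPDK's stock scripts/rpc.py client and the standard rpc_get_methods RPC; the loop itself is a simplification of what autotest_common.sh does, not a copy of it:

    #!/usr/bin/env bash
    # Poll the app's RPC socket until the freshly started nvmf_tgt is ready.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path used by this job
    RPC_SOCK=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        if "${SPDK_DIR}/scripts/rpc.py" -s "${RPC_SOCK}" rpc_get_methods &>/dev/null; then
            echo "target ready after ${i} attempts"
            exit 0
        fi
        sleep 0.1
    done
    echo "target never listened on ${RPC_SOCK}" >&2
    exit 1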
[... 2024-12-13 12:41:48.772818 - 12:41:48.786708: two further reset attempts (12:41:48.772, .785) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0, 10.0.0.2 port 4420 ...]
00:36:21.242 [2024-12-13 12:41:48.798166] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:21.242 [2024-12-13 12:41:48.798203] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... 2024-12-13 12:41:48.798857 - 12:41:48.864740: six further reset attempts (12:41:48.798, .811, .824, .837, .850, .863) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0 ...]
00:36:21.243 [2024-12-13 12:41:48.875648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[... 2024-12-13 12:41:48.876972 - 12:41:48.890886: two further reset attempts (12:41:48.876, .890) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0 ...]
00:36:21.243 [2024-12-13 12:41:48.897711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:21.243 [2024-12-13 12:41:48.897735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:21.243 [2024-12-13 12:41:48.897742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:21.243 [2024-12-13 12:41:48.897748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:21.243 [2024-12-13 12:41:48.897752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:21.243 [2024-12-13 12:41:48.899022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:36:21.243 [2024-12-13 12:41:48.899131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:21.243 [2024-12-13 12:41:48.899133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
[... 2024-12-13 12:41:48.903150 - 12:41:48.917041: two further reset attempts (12:41:48.903, .916) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0 ...]
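The three reactors above line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left out, which is why spdk_app_start reports "Total cores available: 3". Decoding any such mask takes only shell arithmetic (nothing SPDK-specific):

    # 0xE -> core 1, core 2, core 3
    mask=0xE
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
    done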
[... 2024-12-13 12:41:48.929283 - 12:41:48.969179: four further reset attempts (12:41:48.929, .942, .955, .968) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0, 10.0.0.2 port 4420 ...]
[... 2024-12-13 12:41:48.981552 - 12:41:48.982371: one further reset attempt fails the same way (connect() errno = 111, tqpair=0x23fbcf0) ...]
00:36:21.504 12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
12:41:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-12-13 12:41:48.994624 - 12:41:49.021292: three further reset attempts (12:41:48.994, 12:41:49.007, 12:41:49.020) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0 ...]
00:36:21.504 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:21.504 [2024-12-13 12:41:49.029600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... 2024-12-13 12:41:49.033668 - 12:41:49.034375: one further reset attempt fails the same way (connect() errno = 111, tqpair=0x23fbcf0) ...]
00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-12-13 12:41:49.046788 - 12:41:49.073574: three further reset attempts (12:41:49.046, .059, .072) fail the same way -- connect() errno = 111, tqpair=0x23fbcf0 ...]
00:36:21.505 Malloc0
00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... 2024-12-13 12:41:49.085827 - 12:41:49.086500: one further reset attempt fails the same way ...]
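The rpc_cmd calls logged here (transport, Malloc0 bdev, subsystem) plus the add-ns and add-listener calls on the following lines form the usual target bring-up. For reference, the same sequence issued directly with SPDK's scripts/rpc.py would look roughly like this -- arguments copied verbatim from the xtrace, comments ours; rpc_cmd in the test harness is essentially a thin wrapper around this client:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size
    "$RPC" bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420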
00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.505 5113.67 IOPS, 19.98 MiB/s [2024-12-13T11:41:49.205Z] 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.505 [2024-12-13 12:41:49.098867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.505 [2024-12-13 12:41:49.098877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.505 12:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 535514 00:36:21.505 [2024-12-13 12:41:49.125662] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:36:23.822 5929.71 IOPS, 23.16 MiB/s [2024-12-13T11:41:52.460Z] 6622.38 IOPS, 25.87 MiB/s [2024-12-13T11:41:53.398Z] 7176.89 IOPS, 28.03 MiB/s [2024-12-13T11:41:54.336Z] 7581.30 IOPS, 29.61 MiB/s [2024-12-13T11:41:55.355Z] 7947.09 IOPS, 31.04 MiB/s [2024-12-13T11:41:56.321Z] 8230.08 IOPS, 32.15 MiB/s [2024-12-13T11:41:57.276Z] 8489.92 IOPS, 33.16 MiB/s [2024-12-13T11:41:58.254Z] 8700.71 IOPS, 33.99 MiB/s [2024-12-13T11:41:58.254Z] 8880.40 IOPS, 34.69 MiB/s 00:36:30.554 Latency(us) 00:36:30.554 [2024-12-13T11:41:58.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.554 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:30.554 Verification LBA range: start 0x0 length 0x4000 00:36:30.554 Nvme1n1 : 15.01 8878.57 34.68 11046.55 0.00 6404.42 434.96 15416.56 00:36:30.554 [2024-12-13T11:41:58.254Z] =================================================================================================================== 00:36:30.554 [2024-12-13T11:41:58.254Z] Total : 8878.57 34.68 11046.55 0.00 6404.42 434.96 15416.56 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.827 rmmod nvme_tcp 00:36:30.827 rmmod nvme_fabrics 00:36:30.827 rmmod nvme_keyring 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 536419 ']' 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 536419 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 536419 ']' 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 536419 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536419 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536419' 00:36:30.827 killing process with pid 536419 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 536419 00:36:30.827 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 536419 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.101 12:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
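Annotation: condensed, the nvmftestfini teardown traced above amounts to the sequence below, in roughly the order the trace shows it. Pid 536419 and the cvl_* names are specific to this run, and the final netns removal is an assumption about what _remove_spdk_ns expands to:

# Sketch of the logged teardown steps.
modprobe -v -r nvme-tcp                               # rmmod nvme_tcp
modprobe -v -r nvme-fabrics                           # rmmod nvme_fabrics
kill -9 536419 2>/dev/null || true                    # killprocess $nvmfpid
iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only SPDK rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed _remove_spdk_ns step
ip -4 addr flush cvl_0_1                              # initiator-side address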
00:36:33.106 12:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.106 00:36:33.106 real 0m25.956s 00:36:33.106 user 1m0.214s 00:36:33.106 sys 0m6.722s 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:33.106 ************************************ 00:36:33.106 END TEST nvmf_bdevperf 00:36:33.106 ************************************ 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.106 ************************************ 00:36:33.106 START TEST nvmf_target_disconnect 00:36:33.106 ************************************ 00:36:33.106 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:33.383 * Looking for test storage... 00:36:33.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.383 12:42:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.383 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.384 --rc genhtml_branch_coverage=1 00:36:33.384 --rc genhtml_function_coverage=1 00:36:33.384 --rc genhtml_legend=1 00:36:33.384 --rc geninfo_all_blocks=1 00:36:33.384 --rc geninfo_unexecuted_blocks=1 00:36:33.384 00:36:33.384 ' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.384 --rc genhtml_branch_coverage=1 00:36:33.384 --rc genhtml_function_coverage=1 00:36:33.384 --rc genhtml_legend=1 00:36:33.384 --rc geninfo_all_blocks=1 00:36:33.384 --rc geninfo_unexecuted_blocks=1 00:36:33.384 00:36:33.384 ' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.384 --rc genhtml_branch_coverage=1 00:36:33.384 --rc genhtml_function_coverage=1 00:36:33.384 --rc genhtml_legend=1 00:36:33.384 --rc geninfo_all_blocks=1 00:36:33.384 --rc geninfo_unexecuted_blocks=1 00:36:33.384 00:36:33.384 ' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:33.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.384 --rc genhtml_branch_coverage=1 00:36:33.384 --rc genhtml_function_coverage=1 00:36:33.384 --rc genhtml_legend=1 00:36:33.384 --rc geninfo_all_blocks=1 00:36:33.384 --rc geninfo_unexecuted_blocks=1 00:36:33.384 00:36:33.384 ' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source 
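Annotation: the scripts/common.sh walk above is the lcov version gate: lt 1.15 2 splits each version on the characters ".", "-", ":" and compares numerically component by component, so 1.15 sorts below 2 and the branch/function-coverage LCOV_OPTS get exported. The same logic as a self-contained sketch:

# Minimal re-statement of the traced cmp_versions "<" path.
lt() {
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first differing component decides
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace: true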
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:33.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
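Annotation: one genuine defect surfaces here: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", meaning an empty variable reached a numeric test. A sketch of the failure and the defensive form (SOME_FLAG is a hypothetical stand-in; the real variable name is not visible in this trace):

SOME_FLAG=""
[ "$SOME_FLAG" -eq 1 ]                            # reproduces: [: : integer expression expected
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"    # defensive form: default empty/unset to 0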
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:33.384 12:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.981 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:39.982 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:39.982 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:39.982 Found net devices under 0000:af:00.0: cvl_0_0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:39.982 Found net devices under 0000:af:00.1: cvl_0_1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
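Annotation: the scan above is pure sysfs: each PCI function is matched against known NIC IDs (here Intel E810, vendor 0x8086, device 0x159b) and its kernel interfaces are read from the per-device net/ directory, yielding the "Found net devices under 0000:af:00.x: cvl_0_x" lines. A standalone sketch of that discovery:

#!/usr/bin/env bash
# Enumerate E810 ports and print their net interfaces, mirroring the trace.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue                  # skip if no bound net driver
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done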
00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:36:39.982 00:36:39.982 --- 10.0.0.2 ping statistics --- 00:36:39.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.982 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:39.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:36:39.982 00:36:39.982 --- 10.0.0.1 ping statistics --- 00:36:39.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.982 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.982 ************************************ 00:36:39.982 START TEST nvmf_target_disconnect_tc1 00:36:39.982 ************************************ 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.982 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.983 12:42:06 
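Annotation: by this point nvmf_tcp_init (traced just above) has split the two E810 ports across a network namespace: cvl_0_0 moved into cvl_0_0_ns_spdk as the target side (10.0.0.2/24), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1/24), an iptables rule opened TCP 4420, and the two pings confirmed reachability both ways. The same wiring as a standalone sequence:

# Reproduction of the logged nvmf_tcp_init steps; interface and namespace
# names are the ones this runner happened to assign.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator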
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:39.983 12:42:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.983 [2024-12-13 12:42:07.013559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.983 [2024-12-13 12:42:07.013668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c1590 with addr=10.0.0.2, port=4420 00:36:39.983 [2024-12-13 12:42:07.013732] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:39.983 [2024-12-13 12:42:07.013766] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:39.983 [2024-12-13 12:42:07.013803] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:39.983 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:39.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:39.983 Initializing NVMe Controllers 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:39.983 00:36:39.983 real 0m0.117s 00:36:39.983 user 0m0.054s 00:36:39.983 sys 0m0.063s 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 ************************************ 00:36:39.983 END TEST nvmf_target_disconnect_tc1 00:36:39.983 ************************************ 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
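Annotation: tc1 is a negative test: no target application is listening yet, so spdk_nvme_probe() against 10.0.0.2:4420 must fail, and the case passes only because the reconnect invocation is wrapped in the NOT helper, which inverts the exit status (hence es=1 counting as success above). A minimal sketch of that inversion pattern; the real helper in autotest_common.sh also validates the command path, which is omitted here:

NOT() {
    if "$@"; then
        return 1      # the command unexpectedly succeeded
    fi
    return 0          # the failure we were waiting for
}

NOT /bin/false && echo "expected failure observed"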
00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 ************************************ 00:36:39.983 START TEST nvmf_target_disconnect_tc2 00:36:39.983 ************************************ 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=541634 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 541634 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 541634 ']' 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 [2024-12-13 12:42:07.146180] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:39.983 [2024-12-13 12:42:07.146219] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.983 [2024-12-13 12:42:07.225085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:39.983 [2024-12-13 12:42:07.248264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.983 [2024-12-13 12:42:07.248302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:39.983 [2024-12-13 12:42:07.248309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.983 [2024-12-13 12:42:07.248315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.983 [2024-12-13 12:42:07.248323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.983 [2024-12-13 12:42:07.249794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:39.983 [2024-12-13 12:42:07.249888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:39.983 [2024-12-13 12:42:07.250015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:39.983 [2024-12-13 12:42:07.250016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 Malloc0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 [2024-12-13 12:42:07.421499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 12:42:07 
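Annotation: disconnect_init launches nvmf_tgt inside the target namespace with -m 0xF0, which is why four reactors come up on cores 4-7 above, and waitforlisten 541634 blocks until the app's RPC socket answers before any rpc_cmd is issued. A sketch of that launch-and-wait pattern, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock RPC socket:

# Start the target in its namespace, then poll the RPC socket until it answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening"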
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 [2024-12-13 12:42:07.450483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=541659 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:39.983 12:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:41.898 12:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 541634 00:36:41.898 12:42:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error 
(sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 [2024-12-13 12:42:09.482485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read 
completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 [2024-12-13 12:42:09.482697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 
00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Read completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.898 Write completed with error (sct=0, sc=8) 00:36:41.898 starting I/O failed 00:36:41.899 [2024-12-13 12:42:09.482892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:41.899 [2024-12-13 12:42:09.483013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.483041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.483152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.483163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.483311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.483321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.483535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.483566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.483694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.483726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.484845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.484872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 
00:36:41.899 [2024-12-13 12:42:09.485079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.485092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.485191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.485202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.485394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.485427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.485618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.485651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.485879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.485913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.486023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.486034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.486122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.486133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.486358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.486391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.486542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.486577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.486757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.486803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 
00:36:41.899 [2024-12-13 12:42:09.487006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.487160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.487194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.487386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.487420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.487550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.487583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.487828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.487863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.488050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.488085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.488264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.488298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.488412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.488447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.488593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.488628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.488826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.488861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 
00:36:41.899 [2024-12-13 12:42:09.489005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.489044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.489230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.489256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.489364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.489389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.489657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.489682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.489833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.489860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.490032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.490058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.490290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.490316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.490512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.490538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.490736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.490762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.490888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.490913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 
00:36:41.899 [2024-12-13 12:42:09.491031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.491309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.899 [2024-12-13 12:42:09.491335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.899 qpair failed and we were unable to recover it. 00:36:41.899 [2024-12-13 12:42:09.491458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.491484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.491584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.491608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.491755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.491794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.491956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.491982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.492155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.492180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.492295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.492321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.492441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.492466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.492698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.492724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 
00:36:41.900 [2024-12-13 12:42:09.492947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.492974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.493604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.493641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.493820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.493846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.493972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.493997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.494171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.494196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.494311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.494335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.494591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.494616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.494845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.494871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.495065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.495090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.495269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.495293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 
00:36:41.900 [2024-12-13 12:42:09.495518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.495552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.495737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.495771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.495935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.495970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.496115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.496161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.496304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.496338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.496545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.496578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.496794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.496830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.497026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.497059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.497249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.497283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.497400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.497433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 
00:36:41.900 [2024-12-13 12:42:09.497672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.497706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.497883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.497919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.498104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.498138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.498318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.498352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.498554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.498588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.498828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.498864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.499056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.499089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.499206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.499240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.499369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.499402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.900 [2024-12-13 12:42:09.499522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.499555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 
00:36:41.900 [2024-12-13 12:42:09.499684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.900 [2024-12-13 12:42:09.499716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.900 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.499908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.499943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.500069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.500102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.500275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.500308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.500503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.500543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.500796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.500831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.500960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.500993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.501165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.501199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.501343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.501377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.501627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.501661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 
00:36:41.901 [2024-12-13 12:42:09.501875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.501911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.502930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.502965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.503096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.503130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.503274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.503308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.503501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.503535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 
00:36:41.901 [2024-12-13 12:42:09.503706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.503741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.503957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.503993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.504122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.504156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.504350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.504384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.504592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.504626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.504822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.504857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.505053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.505087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.505279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.505313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.505518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.505551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 [2024-12-13 12:42:09.505803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.505838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 
00:36:41.901 [2024-12-13 12:42:09.506019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.901 [2024-12-13 12:42:09.506053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.901 qpair failed and we were unable to recover it. 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Read completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.901 Write completed with error (sct=0, sc=8) 00:36:41.901 starting I/O failed 00:36:41.902 Read completed with error (sct=0, sc=8) 00:36:41.902 starting I/O failed 00:36:41.902 Read completed with error (sct=0, sc=8) 00:36:41.902 starting I/O failed 00:36:41.902 Read completed with error (sct=0, sc=8) 00:36:41.902 starting I/O failed 00:36:41.902 [2024-12-13 12:42:09.506695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:41.902 [2024-12-13 12:42:09.506932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.506992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.507132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.507168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.507310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.507344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.507595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.507629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.507843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.507879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.508074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.508107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.508281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.508333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.508533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.508567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.508838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.508873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.509016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.509050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 
00:36:41.902 [2024-12-13 12:42:09.509164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.509199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.509472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.509506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.509762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.509805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.509942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.509976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.510102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.510134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.510322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.510356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.510651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.510684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.510889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.510923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.511117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.511150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.511348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.511381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 
00:36:41.902 [2024-12-13 12:42:09.511531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.511564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.511706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.511739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.511912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.511947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.512142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.512176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.512305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.512338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.512466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.512500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.512766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.512812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.512983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.513017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.513208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.513242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 00:36:41.902 [2024-12-13 12:42:09.513374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.513407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it. 
00:36:41.902 [2024-12-13 12:42:09.513672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.902 [2024-12-13 12:42:09.513705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.902 qpair failed and we were unable to recover it.
00:36:41.907 [the two messages above repeated for tqpair=0x7ff6e4000b90 with timestamps 12:42:09.513 through 12:42:09.557; errno = 111 is ECONNREFUSED, and every retry ended with "qpair failed and we were unable to recover it."]
00:36:41.907 [2024-12-13 12:42:09.557514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.907 [2024-12-13 12:42:09.557592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.907 qpair failed and we were unable to recover it. 00:36:41.907 [2024-12-13 12:42:09.557814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.907 [2024-12-13 12:42:09.557854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.907 qpair failed and we were unable to recover it. 00:36:41.907 [2024-12-13 12:42:09.558053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.558088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.558235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.558269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.558515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.558548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.558737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.558772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.559008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.559043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.559251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.559284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.559479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.559514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.559723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.559757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 
00:36:41.908 [2024-12-13 12:42:09.559958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.559992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.560196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.560229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.560408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.560442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.560631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.560664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.560920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.560956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.561087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.561121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.561387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.561420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.561663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.561697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.561909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.561945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.562059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.562093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 
00:36:41.908 [2024-12-13 12:42:09.562290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.562323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.562612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.562645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.562888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.562924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.563134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.563167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.563423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.563456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.563638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.563672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.563869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.563904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.564084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.564123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.564309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.564344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.564478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.564511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 
00:36:41.908 [2024-12-13 12:42:09.564779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.564823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.565005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.565039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.565235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.565270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.565600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.565634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.565880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.565915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.566141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.566175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.566372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.566405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.566681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.566714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.566917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.566952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.567090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.567124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 
00:36:41.908 [2024-12-13 12:42:09.567299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.567333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.567551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-13 12:42:09.567585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.908 qpair failed and we were unable to recover it. 00:36:41.908 [2024-12-13 12:42:09.567801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.567837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.567951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.567986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.568198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.568231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.568482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.568516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.568633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.568667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.568840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.568876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.569075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.569108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.569250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.569285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-13 12:42:09.569553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.569586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.569834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.569870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.570066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.570100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.570279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.570313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.570446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.570480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.570681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.570716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.570943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.570977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.571121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.571155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.571349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.571464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.571498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-13 12:42:09.571770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.571825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.572005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.572039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.572171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.572206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.572333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.572366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.572544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.572578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.572881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.572916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.573166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.573200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.573339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.573372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.573690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.573764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.573925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.573961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-13 12:42:09.574162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.574195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.574338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.574372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.574570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.574604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.574803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.574839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.575070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.575212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.575247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.575444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.575478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.575810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.575846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.576093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.576127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.576284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-13 12:42:09.576470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.576504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-13 12:42:09.576799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-13 12:42:09.576840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.577039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.577074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.577272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.577307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.577441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.577475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.577738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.577772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.578033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.578068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.578262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.578296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.578587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.578621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.578745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.578779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-13 12:42:09.578919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.578953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.579100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.579134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.579342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.579376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.579622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.579656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.579880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.579916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.580118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.580152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.580309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.580343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.580519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.580553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.580820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.580856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.581068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.581103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-13 12:42:09.581229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.581263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.581507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.581541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.581795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.581830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.581981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.582015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.582209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.582244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.582352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.582383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.582651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.582686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.582922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.582959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.583105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.583148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.583334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.583369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-13 12:42:09.583576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.583611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.583733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.583767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.583986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.584022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.584234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.584269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.584454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.584488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.584739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.584773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.585082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.585116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.586578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.586631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.586929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.586966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.587196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.587230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-13 12:42:09.587374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-13 12:42:09.587408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-13 12:42:09.587665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.587708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.587993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.588028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.588226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.588261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.588480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.588515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.588751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.588794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.588989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.589024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-13 12:42:09.589316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-13 12:42:09.589351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:42.209 [2024-12-13 12:42:09.589494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.209 [2024-12-13 12:42:09.589529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.209 qpair failed and we were unable to recover it. 00:36:42.209 [2024-12-13 12:42:09.589702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.209 [2024-12-13 12:42:09.589736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.209 qpair failed and we were unable to recover it. 
00:36:42.209 [2024-12-13 12:42:09.589976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.209 [2024-12-13 12:42:09.590013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.209 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.590209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.590243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.590527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.590560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.590750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.590794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.590985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.591019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.591220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.591255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.591507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.591542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.591694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.591728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.593166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.593220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.593544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.593579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 
00:36:42.210 [2024-12-13 12:42:09.593807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.593842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.594139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.594174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.594317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.594350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.594596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.594630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.594830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.594866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.595067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.595101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.595343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.595378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.595567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.595600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.595804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.595839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.595963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.595997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 
00:36:42.210 [2024-12-13 12:42:09.596242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.596276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.596412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.596446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.596645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.596680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.596880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.596917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.597097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.597130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.597309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.597343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.597481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.597516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.597763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.210 [2024-12-13 12:42:09.597833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.210 qpair failed and we were unable to recover it. 00:36:42.210 [2024-12-13 12:42:09.598130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.598164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.598350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.598385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 
00:36:42.211 [2024-12-13 12:42:09.598563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.598598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.598793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.598829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.599025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.599060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.599197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.599232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.599517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.599551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.599673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.599708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.599954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.600007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.600124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.600158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.600373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.600408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.600598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.600633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 
00:36:42.211 [2024-12-13 12:42:09.600847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.600883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.601076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.601110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.601222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.601256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.601487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.601522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.601681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.601715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.601951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.601988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.602129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.602164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.602362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.602396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.602664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.602698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.603009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.603045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 
00:36:42.211 [2024-12-13 12:42:09.603297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.603333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.211 [2024-12-13 12:42:09.603521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.211 [2024-12-13 12:42:09.603556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.211 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.603733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.603767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.603921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.603957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.604082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.604115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.604361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.604394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.604600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.604634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.604907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.604938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.605118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.605155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.605395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.605425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 
00:36:42.212 [2024-12-13 12:42:09.605692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.605723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.605947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.605979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.606179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.606210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.606440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.606471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.606760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.606851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.606998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.607029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.607230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.607260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.607467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.607497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.607613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.607644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.607839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.607871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 
00:36:42.212 [2024-12-13 12:42:09.608138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.608355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.608386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.608572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.608603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.608798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.608831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.609087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.609119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.609317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.609350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.609572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.212 [2024-12-13 12:42:09.609604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.212 qpair failed and we were unable to recover it. 00:36:42.212 [2024-12-13 12:42:09.609735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.609796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.609991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.610023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.610211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.610242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 
00:36:42.213 [2024-12-13 12:42:09.610385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.610417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.610631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.610664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.610872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.610906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.611027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.611059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.611350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.611385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.611643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.611678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.611924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.611959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.612156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.612191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.612396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.612432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.612629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.612662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 
00:36:42.213 [2024-12-13 12:42:09.612851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.612887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.613097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.613131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.613262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.613296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.613505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.613540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.613810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.213 [2024-12-13 12:42:09.613845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.213 qpair failed and we were unable to recover it. 00:36:42.213 [2024-12-13 12:42:09.613983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.217 [2024-12-13 12:42:09.614017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.217 qpair failed and we were unable to recover it. 00:36:42.217 [2024-12-13 12:42:09.614150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.217 [2024-12-13 12:42:09.614185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.217 qpair failed and we were unable to recover it. 00:36:42.217 [2024-12-13 12:42:09.614364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.217 [2024-12-13 12:42:09.614398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.217 qpair failed and we were unable to recover it. 00:36:42.217 [2024-12-13 12:42:09.614666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.217 [2024-12-13 12:42:09.614706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.614953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.614988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.615137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.615170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.615355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.615607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.615642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.615884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.615922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.616073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.616109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.616294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.616328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.616575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.616765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.616811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.617016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.617052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.617228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.617263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.617474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.617508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.617750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.617795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.618083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.618118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.618227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.618260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.618474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.618509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.618707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.618741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.618885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.618920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.619110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.619144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.619358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.619391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.619637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.619671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.619894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.619931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.620199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.620233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.620349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.620383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.620566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.620600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.620829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.620891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.621032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.621068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.621251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.621286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.621479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.621515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.621823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.621860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.622043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.622078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.622211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.622245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.622458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.622493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.622687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.622721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.623002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.623038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.623245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.623279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.623569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.623604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.623741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.623776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.623971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.624005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.624222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.624262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.624471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.624506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.624769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.624840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.625035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.625071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.625275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.625309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.625586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.625620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.625907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.626135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.626170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.626311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.626346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.626542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.626576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.626700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.626735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 00:36:42.218 [2024-12-13 12:42:09.626941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.218 [2024-12-13 12:42:09.626978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.218 qpair failed and we were unable to recover it. 
00:36:42.218 [2024-12-13 12:42:09.627162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.627197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.627457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.627492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.627637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.627671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.627867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.627903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.628198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.628234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.628455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.628490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.628611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.628645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.628946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.628983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.629269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.629304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.629499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.629534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 
00:36:42.219 [2024-12-13 12:42:09.629731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.629766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.629920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.629956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.630282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.630316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.630522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.630557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.630691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.630725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.630878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.630915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.631185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.631219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.631483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.631519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.631842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.631880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.632060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.632094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 
00:36:42.219 [2024-12-13 12:42:09.632284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.632319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.632503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.632539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.632852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.632889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.633018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.633052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.633302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.633337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.633452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.633486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.633731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.633766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.633979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.634013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.634171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.634213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.634335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.634370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 
00:36:42.219 [2024-12-13 12:42:09.634553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.634588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.634859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.634895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.635102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.635136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.635435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.635472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.635598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.635633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.635756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.635804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.636012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.636048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.636233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.636267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.636551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.636587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 00:36:42.219 [2024-12-13 12:42:09.636766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.219 [2024-12-13 12:42:09.636812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.219 qpair failed and we were unable to recover it. 
00:36:42.219 [2024-12-13 12:42:09.636931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.219 [2024-12-13 12:42:09.636966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.219 qpair failed and we were unable to recover it.
[... 129 further identical connect()/qpair-failure triplets for tqpair=0x7ff6d8000b90 (posix timestamps 12:42:09.637100 through 12:42:09.670143) elided ...]
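Editor's note: errno = 111 is ECONNREFUSED on Linux. The TCP SYN reaches 10.0.0.2, but nothing is accepting on port 4420 (the NVMe/TCP well-known port), so the peer's kernel answers with RST and every qpair connect attempt fails immediately. A minimal sketch that reproduces the exact failure the log keeps printing, assuming a Linux host and no listener on the target address/port (the address and port are taken from the log, not from a real setup):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* AF_INET/SOCK_STREAM mirrors a plain TCP socket like the one the
     * posix sock layer opens for an NVMe/TCP qpair. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);  /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}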
00:36:42.222 [2024-12-13 12:42:09.670388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.222 [2024-12-13 12:42:09.670422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.222 qpair failed and we were unable to recover it.
[... 4 further identical triplets for tqpair=0x7ff6d8000b90 (12:42:09.670542 through 12:42:09.671310) elided ...]
00:36:42.222 [2024-12-13 12:42:09.671600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.222 [2024-12-13 12:42:09.671683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.222 qpair failed and we were unable to recover it.
00:36:42.222 [2024-12-13 12:42:09.672049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.222 [2024-12-13 12:42:09.672143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.222 qpair failed and we were unable to recover it.
[... 3 further identical triplets for tqpair=0x7ff6dc000b90 (12:42:09.672380 through 12:42:09.672879) elided ...]
00:36:42.222 [2024-12-13 12:42:09.673120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.222 [2024-12-13 12:42:09.673154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.222 qpair failed and we were unable to recover it.
[... 29 further identical triplets for tqpair=0x7ff6dc000b90 (12:42:09.673291 through 12:42:09.680369) elided ...]
[... 5 further identical triplets for tqpair=0x7ff6dc000b90 (12:42:09.680600 through 12:42:09.681636) elided ...]
00:36:42.223 [2024-12-13 12:42:09.681820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.223 [2024-12-13 12:42:09.681858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.223 qpair failed and we were unable to recover it.
[... 4 further identical triplets for tqpair=0x7ff6d8000b90 (12:42:09.682047 through 12:42:09.682820) elided ...]
00:36:42.223 [2024-12-13 12:42:09.683073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.223 [2024-12-13 12:42:09.683107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.223 qpair failed and we were unable to recover it.
[... 29 further identical triplets for tqpair=0x7ff6d8000b90 (12:42:09.683289 through 12:42:09.690729) elided ...]
00:36:42.223 [2024-12-13 12:42:09.691023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.691059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.691357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.691392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.691609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.691643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.691859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.691895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.692162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.692196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.692422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.692462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.692693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.692727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.693019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.693055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.693258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.693293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.693483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.693517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 
00:36:42.223 [2024-12-13 12:42:09.693710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.693744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.693890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.693926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.694201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.694236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.694427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.694461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.694739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.694774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.694971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.695007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.695203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.695238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.695474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.695685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.695733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.695874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.695911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 
00:36:42.223 [2024-12-13 12:42:09.696099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.696133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.696420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.696456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.696757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.696802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.696991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.697026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.697206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.697515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.697550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.697743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.697777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.697976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.698012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.698238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.698272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.698473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.698508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 
00:36:42.223 [2024-12-13 12:42:09.698816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.698853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.699128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.699163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.699375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.699409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.699533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.223 [2024-12-13 12:42:09.699568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.223 qpair failed and we were unable to recover it. 00:36:42.223 [2024-12-13 12:42:09.699679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.699714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.699995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.700031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.700236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.700270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.700406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.700442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.700718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.700753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.700945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.700981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.701242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.701277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.701554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.701589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.701879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.701915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.702116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.702151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.702357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.702392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.702624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.702663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.702971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.703007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.703201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.703236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.703438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.703473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.703610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.703644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.703899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.703934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.704123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.704158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.704382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.704416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.704619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.704655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.704854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.704890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.705078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.705114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.705346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.705381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.705656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.705691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.705895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.705937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.706151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.706186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.706320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.706355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.706605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.706640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.706753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.706793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.707086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.707121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.707329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.707364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.707558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.707594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.707729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.707764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.707960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.707997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.708269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.708304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.708572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.708607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.708841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.708876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.709002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.709039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.709328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.709364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.709641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.709675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.709959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.709996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.710273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.710309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.710587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.710622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.710780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.710828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.711127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.711163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.711390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.711425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.711611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.711646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.711929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.711970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.712191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.712226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.712371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.712406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.712670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.712705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.712910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.712947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.713227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.713262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.713497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.713532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.713666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.713700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.713959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.713995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.714274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.714309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.714539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.714573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.714850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.714886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.715166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.715201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.715503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.715538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.715806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.715842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.716003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.716038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.716240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.716274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.716411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.716453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.716758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.716801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 
00:36:42.224 [2024-12-13 12:42:09.716936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.716971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.717221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.717256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.717513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.717805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.224 [2024-12-13 12:42:09.717847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.224 qpair failed and we were unable to recover it. 00:36:42.224 [2024-12-13 12:42:09.718134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.718169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.718426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.718461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.718682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.718716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.718964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.719000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.719262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.719298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.719489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.719524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 
00:36:42.225 [2024-12-13 12:42:09.719805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.719841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.720053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.720088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.720232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.720268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.720485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.720520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.720719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.720755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.720970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.721006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.721211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.721246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.721525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.721561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.721839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.721876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.722065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.722100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 
00:36:42.225 [2024-12-13 12:42:09.722297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.722335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.722529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.722565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.722841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.722879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.723138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.723174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.723474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.723509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.723799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.723842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.724048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.724083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.724337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.724373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.724626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.724661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 00:36:42.225 [2024-12-13 12:42:09.724865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.724901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it. 
00:36:42.225 [2024-12-13 12:42:09.725054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.225 [2024-12-13 12:42:09.725091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.225 qpair failed and we were unable to recover it.
[the same connect() failed / sock connection error / qpair failed triple repeats continuously for tqpair=0x7ff6d8000b90 (addr=10.0.0.2, port=4420) from 12:42:09.725 through 12:42:09.778; duplicate entries elided]
00:36:42.228 [2024-12-13 12:42:09.778351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.778386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it.
00:36:42.228 [2024-12-13 12:42:09.778683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.778717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.778942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.778979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.779241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.779275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.779548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.779583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.779857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.779893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.780179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.780213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.780488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.780523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.780732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.780766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.781031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.781066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.781198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.781233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 
00:36:42.228 [2024-12-13 12:42:09.781452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.781486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.781681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.781716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.781909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.781945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.782145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.782180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.782461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.782494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.782728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.782762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.782963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.782997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.783143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.783178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.783443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.783478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.783689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.783723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 
00:36:42.228 [2024-12-13 12:42:09.783977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.784013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.784314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.784349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.784638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.784672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.784946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.784983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.785274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.785308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.785502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.785543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.785847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.785883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.786174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.786432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.786467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.786740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.786775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 
00:36:42.228 [2024-12-13 12:42:09.787062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.787098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.787236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.228 [2024-12-13 12:42:09.787270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.228 qpair failed and we were unable to recover it. 00:36:42.228 [2024-12-13 12:42:09.787568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.787604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.787897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.787934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.788218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.788251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.788455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.788490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.788675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.788709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.788903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.788939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.789210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.789245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.789556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.789591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.789868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.789905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.790187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.790221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.790444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.790480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.790738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.790773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.790894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.790929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.791123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.791158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.791431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.791465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.791747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.791789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.792062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.792097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.792239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.792273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.792546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.792581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.792862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.792899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.793170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.793205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.793491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.793526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.793800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.793837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.794123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.794158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.794433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.794468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.794757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.794798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.795053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.795087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.795275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.795310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.795513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.795547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.795820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.795856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.796109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.796144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.796257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.796290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.796543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.796578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.796764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.796814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.796997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.797031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.797290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.797324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.797624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.797657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.797919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.797955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.798238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.798273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.798486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.798520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.798706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.798740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.798967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.799003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.799256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.799290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.799475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.799510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.799803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.799838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.800021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.800056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.800357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.800392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.800671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.800706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.800986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.801022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.801167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.801202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.801378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.801412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.801685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.801720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.802009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.802045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.802319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.802352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.802602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.802638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.802940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.802976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.803172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.803207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.803428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.803462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 
00:36:42.229 [2024-12-13 12:42:09.803719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.803754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.804056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.229 [2024-12-13 12:42:09.804092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.229 qpair failed and we were unable to recover it. 00:36:42.229 [2024-12-13 12:42:09.804304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.804340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.804590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.804624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.804816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.804852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.805036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.805071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.805344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.805379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.805600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.805635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.805914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.805950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.806234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.806270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 
00:36:42.230 [2024-12-13 12:42:09.806547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.806582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.806794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.806830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.807031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.807067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.807268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.807302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.807554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.807589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.807886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.807928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.808205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.808241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.808515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.808550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.808838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.808874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.809140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.809176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 
00:36:42.230 [2024-12-13 12:42:09.809302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.809336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.809618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.809652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.809852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.809889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.810188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.810221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.810331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.810362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.810519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.810554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.810770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.810816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.811071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.811106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.811383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.811417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.811610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.811644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 
00:36:42.230 [2024-12-13 12:42:09.811860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.811896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.812148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.812183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.812441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.812476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.812662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.812697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.812894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.812930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.813138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.813173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.813461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.813495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.813718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.813753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.814018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.814053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 00:36:42.230 [2024-12-13 12:42:09.814297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.814330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it. 
00:36:42.230 [2024-12-13 12:42:09.814537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.230 [2024-12-13 12:42:09.814571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.230 qpair failed and we were unable to recover it.
00:36:42.233 [... the same three-line error sequence repeats verbatim for roughly 200 further connect() attempts on tqpair=0x7ff6d8000b90 (addr=10.0.0.2, port=4420) between 12:42:09.814 and 12:42:09.871; every attempt fails with errno = 111 and each time the log reports "qpair failed and we were unable to recover it." ...]
00:36:42.233 [2024-12-13 12:42:09.871254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.871289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.871566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.871601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.871805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.871842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.872127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.872161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.872431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.872467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.872600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.872636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.872951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.872988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.873304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.873340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.873564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.873599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.873802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.873838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 
00:36:42.233 [2024-12-13 12:42:09.874100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.874134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.874438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.874473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.874730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.874765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.875032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.875067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.875358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.875392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.875524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.875559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.233 [2024-12-13 12:42:09.875839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.233 [2024-12-13 12:42:09.875874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.233 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.876080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.876114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.876415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.876449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.876591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.876626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 
00:36:42.234 [2024-12-13 12:42:09.876814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.876851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.877132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.877166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.877431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.877466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.877669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.877705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.877964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.877999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.878251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.878286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.878566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.878601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.878881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.878917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.879201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.879236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.234 [2024-12-13 12:42:09.879462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.879496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 
00:36:42.234 [2024-12-13 12:42:09.879719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.234 [2024-12-13 12:42:09.879754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.234 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.880041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.880077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.880342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.880386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.880608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.880644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.880897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.880934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.881118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.881153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.881356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.881391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.881616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.881649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.881905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.881942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.882245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.882279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 
00:36:42.513 [2024-12-13 12:42:09.882546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.882581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.882862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.882898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.513 [2024-12-13 12:42:09.883105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.513 [2024-12-13 12:42:09.883139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.513 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.883320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.883355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.883551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.883586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.883797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.883832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.884116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.884151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.884452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.884487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.884696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.884731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.885081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.885117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 
00:36:42.514 [2024-12-13 12:42:09.885328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.885363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.885637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.885671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.885955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.885992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.886180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.886215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.886415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.886450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.886729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.887047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.887082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.887270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.887304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.887587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.887622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.887808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.887844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 
00:36:42.514 [2024-12-13 12:42:09.888110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.888341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.888376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.888555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.888589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.888818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.888854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.889131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.889166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.889303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.889337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.889641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.889677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.889953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.889990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.890127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.890161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.890465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.890500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 
00:36:42.514 [2024-12-13 12:42:09.890684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.890719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.891009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.891045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.891317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.891357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.891617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.891652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.891951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.891987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.892113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.892147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.892372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.892406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.892589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.892624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.892879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.892915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.893190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.893225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 
00:36:42.514 [2024-12-13 12:42:09.893477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.893513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.514 qpair failed and we were unable to recover it. 00:36:42.514 [2024-12-13 12:42:09.893696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.514 [2024-12-13 12:42:09.893731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.893963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.894000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.894299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.894333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.894616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.894651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.894928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.894964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.895099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.895133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.895316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.895352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.895629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.895663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.895855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.895891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 
00:36:42.515 [2024-12-13 12:42:09.896098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.896133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.896406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.896441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.896622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.896657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.896926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.896962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.897239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.897274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.897485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.897520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.897722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.897757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.898055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.898091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.898377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.898412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.898626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.898661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 
00:36:42.515 [2024-12-13 12:42:09.898844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.898881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.899015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.899049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.899280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.899314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.899630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.899912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.899948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.900226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.900261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.900541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.900576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.900859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.900896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.901116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.901150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.901346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.901381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 
00:36:42.515 [2024-12-13 12:42:09.901639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.901674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.901950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.901987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.902268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.902316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.902499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.902534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.902823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.902860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.903046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.903080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.903381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.903417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.903681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.903716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.903985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.904022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 00:36:42.515 [2024-12-13 12:42:09.904309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.515 [2024-12-13 12:42:09.904346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.515 qpair failed and we were unable to recover it. 
00:36:42.515 [2024-12-13 12:42:09.904661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.904697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.904982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.905019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.905204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.905238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.905504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.905539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.905732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.905767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.905977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.906011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.906296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.906332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.906568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.906604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.906854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.906890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.907169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.907204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 
00:36:42.516 [2024-12-13 12:42:09.907486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.907521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.907837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.907873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.908169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.908204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.908455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.908489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.908618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.908653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.908834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.908871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.909019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.909054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.909327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.909362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.909545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.909579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.909854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.909891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 
00:36:42.516 [2024-12-13 12:42:09.910095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.910130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.910436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.910471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.910772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.910820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.911078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.911113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.911300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.911335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.911535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.911570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.911844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.911880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.912064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.912098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.912303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.912338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.912614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.912648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 
00:36:42.516 [2024-12-13 12:42:09.912924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.912961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.913158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.913194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.913445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.913485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.913687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.913721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.913987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.914023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.914307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.914341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.914565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.914600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.914871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.914907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.516 [2024-12-13 12:42:09.915120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.516 [2024-12-13 12:42:09.915155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.516 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.915339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.915374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 
00:36:42.517 [2024-12-13 12:42:09.915579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.915615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.915889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.915925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.916164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.916200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.916470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.916504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.916708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.916759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.916997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.917033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.917225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.917259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.917453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.917489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.917764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.917812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.918104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.918140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 
00:36:42.517 [2024-12-13 12:42:09.918415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.918451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.918708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.918743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.918972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.919009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.919281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.919315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.919594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.919628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.919911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.919948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.920223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.920258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.920543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.920578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.920859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.920895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.921178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.921213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 
00:36:42.517 [2024-12-13 12:42:09.921491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.921526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.921792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.921829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.921941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.921974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.922225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.922260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.922468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.922503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.922684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.922720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.922860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.922897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.923149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.923183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.923323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.923358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.923632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.923667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 
00:36:42.517 [2024-12-13 12:42:09.923862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.923897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.924090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.924126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.924405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.924445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.924653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.924688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.924874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.924911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.925118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.925153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.517 [2024-12-13 12:42:09.925356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.517 [2024-12-13 12:42:09.925391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.517 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.925596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.925834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.925870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.926070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.926105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 
00:36:42.518 [2024-12-13 12:42:09.926299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.926334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.926614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.926649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.926868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.926905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.927199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.927234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.927442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.927476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.927751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.927799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.928012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.928047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.928236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.928271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.928522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.928557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.928836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.928872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 
00:36:42.518 [2024-12-13 12:42:09.929123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.929158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.929343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.929378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.929656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.929692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.929955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.929992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.930285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.930319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.930586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.930622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.930908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.930944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.931218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.931253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.931542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.931577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.931852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.931887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 
00:36:42.518 [2024-12-13 12:42:09.932148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.932183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.932330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.932364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.932569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.932603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.932823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.932860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.933092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde35e0 is same with the state(6) to be set 00:36:42.518 [2024-12-13 12:42:09.933477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.933557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.933803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.933844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.934052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.934089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.934299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.934333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 00:36:42.518 [2024-12-13 12:42:09.934659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.518 [2024-12-13 12:42:09.934694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.518 qpair failed and we were unable to recover it. 
00:36:42.518 [2024-12-13 12:42:09.934846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.934883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.935089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.935124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.935328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.935362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.935628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.935662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.935941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.935978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.936135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.936170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.936367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.936401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.936591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.936627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.936826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.936863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.937105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.937138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 
00:36:42.519 [2024-12-13 12:42:09.937414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.937448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.937731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.937768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.938117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.938430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.938465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.938667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.938702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.938853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.938889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.939013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.939198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.939361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.939509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 
00:36:42.519 [2024-12-13 12:42:09.939691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.939957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.939993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.940212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.940246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.940450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.940484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.940758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.940803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.940941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.940976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.941224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.941259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.941477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.941511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.941800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.941837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.942023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.942057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 
00:36:42.519 [2024-12-13 12:42:09.942321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.942356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.942606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.942640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.942837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.942872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.943149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.943183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.943397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.943433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.943630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.519 [2024-12-13 12:42:09.943664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.519 qpair failed and we were unable to recover it. 00:36:42.519 [2024-12-13 12:42:09.943862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.943898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.944088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.944123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.944382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.944417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.944615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.944650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 
00:36:42.520 [2024-12-13 12:42:09.944862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.944899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.945148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.945183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.945438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.945471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.945759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.945802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.946014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.946048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.946231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.946265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.946488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.946523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.946703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.946738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.947027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.947063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.947267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.947302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 
00:36:42.520 [2024-12-13 12:42:09.947485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.947519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.947801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.947837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.948110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.948163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.948372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.948407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.948659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.948693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.948907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.948944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.949209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.949250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.949512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.949548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.949827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.949865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.950080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.950114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 
00:36:42.520 [2024-12-13 12:42:09.950301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.950335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.950559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.950594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.950801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.950837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.951024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.951059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.951335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.951370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.951624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.951659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.951860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.951896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.952030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.952065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.952265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.952301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.952583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.952618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 
00:36:42.520 [2024-12-13 12:42:09.952845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.952882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.953090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.953126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.953378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.953413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.953621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.953656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.520 [2024-12-13 12:42:09.953923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.520 [2024-12-13 12:42:09.953960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.520 qpair failed and we were unable to recover it. 00:36:42.521 [2024-12-13 12:42:09.954241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.521 [2024-12-13 12:42:09.954276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.521 qpair failed and we were unable to recover it. 00:36:42.521 [2024-12-13 12:42:09.954555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.521 [2024-12-13 12:42:09.954589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.521 qpair failed and we were unable to recover it. 00:36:42.521 [2024-12-13 12:42:09.954848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.521 [2024-12-13 12:42:09.954885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.521 qpair failed and we were unable to recover it. 00:36:42.521 [2024-12-13 12:42:09.955071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.521 [2024-12-13 12:42:09.955105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.521 qpair failed and we were unable to recover it. 00:36:42.521 [2024-12-13 12:42:09.955360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.521 [2024-12-13 12:42:09.955394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.521 qpair failed and we were unable to recover it. 
00:36:42.526 [2024-12-13 12:42:10.006586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.006630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.006862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.006927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.007135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.007177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.007346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.007394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.007536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.007570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.007733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.007770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.007934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.007970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.008246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.008281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.008502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.008537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.008736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.008770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 
00:36:42.526 [2024-12-13 12:42:10.008996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.009031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.009164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.009197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.009392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.009428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.009678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.009714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.009926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.009962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.010172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.010207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.010411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.010446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.010702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.010736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.010898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.010934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.011089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.011123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 
00:36:42.526 [2024-12-13 12:42:10.011253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.011287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.011551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.011586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.011726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.011760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.011977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.012014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.012159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.012194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.012404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.012439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.012727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.012768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.012951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.012987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.526 [2024-12-13 12:42:10.013119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.526 [2024-12-13 12:42:10.013153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.526 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.013307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.013342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 
00:36:42.527 [2024-12-13 12:42:10.013546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.013581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.013843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.013882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.014037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.014073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.014223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.014258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.014486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.014521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.014736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.014771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.014934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.014971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.015179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.015214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.015434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.015469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.015728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.015764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 
00:36:42.527 [2024-12-13 12:42:10.016085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.016122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.016332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.016367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.016561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.016597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.016806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.016843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.017053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.017087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.017290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.017325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.017549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.017582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.017799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.017836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.017993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.018028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.018244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.018277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 
00:36:42.527 [2024-12-13 12:42:10.018427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.018462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.018644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.018678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.018910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.018947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.019144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.019179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.019392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.019426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.019633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.019667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.019907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.019943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.020096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.020132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.020337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.020372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.020603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.020638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 
00:36:42.527 [2024-12-13 12:42:10.020921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.020959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.527 qpair failed and we were unable to recover it. 00:36:42.527 [2024-12-13 12:42:10.021112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.527 [2024-12-13 12:42:10.021148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.021340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.021375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.021631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.021666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.021817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.021853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.022070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.022106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.022288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.022329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.022465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.022500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.022704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.022739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.022908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.022946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 
00:36:42.528 [2024-12-13 12:42:10.023224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.023258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.023561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.023598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.023883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.023921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.024050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.024086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.024294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.024328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.024634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.024668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.024867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.024903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.025155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.025193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.025464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.025648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.025683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 
00:36:42.528 [2024-12-13 12:42:10.025905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.025943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.026197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.026232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.026505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.026542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.026749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.026796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.026981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.027016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.027304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.027339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.027614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.027648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.027913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.027949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.028255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.028289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.028450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.028494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 
00:36:42.528 [2024-12-13 12:42:10.028658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.028704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.028891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.028941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.029177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.029222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.029422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.029470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.029672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.029732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.029911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.029953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.030177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.030222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.030432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.030475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.030678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.030733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 00:36:42.528 [2024-12-13 12:42:10.031009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.031104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.528 qpair failed and we were unable to recover it. 
00:36:42.528 [2024-12-13 12:42:10.031290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.528 [2024-12-13 12:42:10.031338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.031488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.031525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.031658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.031697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.031928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.031961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.032200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.032349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.032384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.032500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.032543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.032724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.032757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.032918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.032952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.033159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.033193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 
00:36:42.529 [2024-12-13 12:42:10.033329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.033362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.033630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.033665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.033949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.033984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.034135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.034168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.034393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.034426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.034633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.034666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.034862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.034896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.035053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.035087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.035223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.035256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.035455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.035488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 
00:36:42.529 [2024-12-13 12:42:10.035687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.035720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.035851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.035886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.036079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.036112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.036370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.036554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.036586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.036862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.036897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.037044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.037077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.037207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.037240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.037524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.037557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.037835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.037872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 
00:36:42.529 [2024-12-13 12:42:10.038101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.038133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.038343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.038375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.038502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.038534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.038738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.038772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.038975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.039010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.039213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.039247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.039383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.039415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.039629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.039662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.529 [2024-12-13 12:42:10.039909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.529 [2024-12-13 12:42:10.039944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.529 qpair failed and we were unable to recover it. 00:36:42.530 [2024-12-13 12:42:10.040153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.530 [2024-12-13 12:42:10.040188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.530 qpair failed and we were unable to recover it. 
00:36:42.530 [2024-12-13 12:42:10.040449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.530 [2024-12-13 12:42:10.040483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.530 qpair failed and we were unable to recover it.
00:36:42.530-00:36:42.535 [2024-12-13 12:42:10.040672 through 12:42:10.093373] the same three-part message repeats continuously: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." The failing tqpair is 0x7ff6dc000b90 up to 12:42:10.057194, 0x7ff6d8000b90 for a single attempt at 12:42:10.057457, and 0x7ff6e4000b90 from 12:42:10.057831 onward.
00:36:42.535 [2024-12-13 12:42:10.093561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.093595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.093813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.093848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.094054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.094087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.094321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.094354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.094627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.094660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.094934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.094969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.095167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.095201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.095400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.095436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.095627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.095662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.535 [2024-12-13 12:42:10.095877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.095913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 
00:36:42.535 [2024-12-13 12:42:10.096134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.535 [2024-12-13 12:42:10.096170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.535 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.096382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.096416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.096716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.096749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.096969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.097005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.097189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.097222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.097426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.097460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.097595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.097628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.097851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.097886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.098039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.098215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 
00:36:42.536 [2024-12-13 12:42:10.098451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.098634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.098777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.098942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.098981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.099103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.099136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.099331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.099383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.099603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.099637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.099831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.099865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.099993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.100025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.100232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.100266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 
00:36:42.536 [2024-12-13 12:42:10.100459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.100494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.100678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.100711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.100854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.100889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.101055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.101314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.101545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.101693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.101865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.101988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.102022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.102301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.102337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 
00:36:42.536 [2024-12-13 12:42:10.102552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.102589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.102802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.102835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.103027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.103060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.103255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.536 [2024-12-13 12:42:10.103287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.536 qpair failed and we were unable to recover it. 00:36:42.536 [2024-12-13 12:42:10.103488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.103522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.103638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.103670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.103803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.103838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.103981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.104014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.104201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.104234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.104413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.104446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 
00:36:42.537 [2024-12-13 12:42:10.104583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.104617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.104842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.104876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.105939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.105973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.106154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.106187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.106342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.106375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 
00:36:42.537 [2024-12-13 12:42:10.106582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.106615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.106868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.106903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.107039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.107072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.107187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.107226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.107343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.107375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.107590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.107623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.107806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.107842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.108055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.108087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.108284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.108318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.108470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.108503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 
00:36:42.537 [2024-12-13 12:42:10.108695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.108729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.108958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.108992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.109108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.109141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.109326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.109359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.109541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.109574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.109876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.109910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.110033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.110067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.110260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.110293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.110534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.110567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.110682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.110714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 
00:36:42.537 [2024-12-13 12:42:10.110968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.111002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.111134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.111166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.537 [2024-12-13 12:42:10.111309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.537 [2024-12-13 12:42:10.111342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.537 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.111606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.111638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.111828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.111863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.112050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.112200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.112345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.112585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.112734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 
00:36:42.538 [2024-12-13 12:42:10.112912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.112947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.113138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.113171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.113449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.113482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.113690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.113723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.113888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.113922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.114071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.114235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.114403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.114632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.114803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 
00:36:42.538 [2024-12-13 12:42:10.114960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.114993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.115106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.115139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.115270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.115303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.115496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.115537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.115662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.115694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.115859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.115893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.116152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.116185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.116373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.116406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.116667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.116702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.116833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.116868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 
00:36:42.538 [2024-12-13 12:42:10.117065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.117101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.117290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.117327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.117453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.117486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.117743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.117777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.118054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.118087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.118286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.118319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.118524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.118558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.118751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.118793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.118991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.119024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.119141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.119180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 
00:36:42.538 [2024-12-13 12:42:10.119383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.119416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.538 qpair failed and we were unable to recover it. 00:36:42.538 [2024-12-13 12:42:10.119600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.538 [2024-12-13 12:42:10.119634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.119753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.119799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.119937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.119971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.120084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.120127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.120329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.120364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.120543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.120576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.120761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.120808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.120963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.120996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 00:36:42.539 [2024-12-13 12:42:10.121152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.539 [2024-12-13 12:42:10.121186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.539 qpair failed and we were unable to recover it. 
00:36:42.539 [2024-12-13 12:42:10.121405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.539 [2024-12-13 12:42:10.121438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.539 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure triplets repeat for tqpair=0x7ff6e4000b90 from 12:42:10.121711 through 12:42:10.126739]
00:36:42.539 [2024-12-13 12:42:10.126945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.539 [2024-12-13 12:42:10.127022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:42.539 qpair failed and we were unable to recover it.
[identical triplets repeat for tqpair=0xdd56a0 from 12:42:10.127237 through 12:42:10.150751]
00:36:42.542 [2024-12-13 12:42:10.150991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.542 [2024-12-13 12:42:10.151062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.542 qpair failed and we were unable to recover it.
[identical triplets repeat for tqpair=0x7ff6d8000b90 from 12:42:10.151278 through 12:42:10.166753]
00:36:42.544 [2024-12-13 12:42:10.166935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.544 [2024-12-13 12:42:10.167010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:42.544 qpair failed and we were unable to recover it.
[identical triplets repeat for tqpair=0xdd56a0 from 12:42:10.167144 through 12:42:10.173223]
00:36:42.544 [2024-12-13 12:42:10.173513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.173545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.173842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.173876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.174071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.174103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.174234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.174266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.174485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.174517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.174735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.174767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.174975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.175008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.175256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.175288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.175432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.175464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 00:36:42.544 [2024-12-13 12:42:10.175709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.544 [2024-12-13 12:42:10.175740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.544 qpair failed and we were unable to recover it. 
00:36:42.545 [2024-12-13 12:42:10.175881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.175914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.176127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.176159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.176427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.176459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.176754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.176798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.177053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.177089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.177294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.177326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.177589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.177620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.177887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.177921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.178216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.178248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.178535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.178568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 
00:36:42.545 [2024-12-13 12:42:10.178771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.178815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.179061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.179093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.179229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.179260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.179481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.179512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.179657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.179693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.179965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.179999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.180124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.180158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.180354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.180387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.180638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.180672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.180964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 
00:36:42.545 [2024-12-13 12:42:10.181109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.181140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.181383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.181417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.181636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.181670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.181851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.181891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.182081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.182114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.182289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.182320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.182498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.182530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.182804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.182838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.183031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.183063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.183362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.183395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 
00:36:42.545 [2024-12-13 12:42:10.183664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.183695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.183881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.183914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.184189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.184220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.184364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.545 [2024-12-13 12:42:10.184396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.545 qpair failed and we were unable to recover it. 00:36:42.545 [2024-12-13 12:42:10.184593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.184625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.184859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.184893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.185092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.185124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.185319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.185352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.185532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.185563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.185772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.185812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 
00:36:42.546 [2024-12-13 12:42:10.186064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.186097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.186348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.186379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.186570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.186601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.186810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.186844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.187111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.187143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.187290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.187322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.187432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.187464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.187657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.187690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.187863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.187897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.188167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.188199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 
00:36:42.546 [2024-12-13 12:42:10.188496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.188534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.188737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.188769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.188970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.189003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.189139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.189170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.189460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.189492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.189702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.189733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.189945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.189978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.190100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.190132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.190349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.190381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.190591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.190622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 
00:36:42.546 [2024-12-13 12:42:10.190802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.190835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.191043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.191075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.191204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.191236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.191461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.191493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.191703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.191735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.192004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.192038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.192245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.192277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.192468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.192500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.546 [2024-12-13 12:42:10.192746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.546 [2024-12-13 12:42:10.192779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.546 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.192989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.193023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 
00:36:42.830 [2024-12-13 12:42:10.193296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.193328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.193514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.193548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.193779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.193822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.193956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.193988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.194191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.194223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.194400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.194432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.194682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.194713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.194986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.195019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.195306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.195339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.195611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.195642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 
00:36:42.830 [2024-12-13 12:42:10.195869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.195903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.196115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.196148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.196342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.196373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.196672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.196703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.196925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.196958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.197141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.197172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.197381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.197414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.197673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.197705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.197962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.197995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.198291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.198323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 
00:36:42.830 [2024-12-13 12:42:10.198520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.198553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.198749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.198800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.198997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.199030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.199211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.199243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.199533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.199565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.199745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.199776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.200004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.200036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.200245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.200277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.200533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.200566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.200756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.200800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 
00:36:42.830 [2024-12-13 12:42:10.200996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.830 [2024-12-13 12:42:10.201028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.830 qpair failed and we were unable to recover it. 00:36:42.830 [2024-12-13 12:42:10.201210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.201242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.201468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.201499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.201744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.201775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.202040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.202072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.202217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.202249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.202440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.202471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.202696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.202728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.203005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.203039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.203325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.203356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 
00:36:42.831 [2024-12-13 12:42:10.203554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.203586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.203846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.203880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.204080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.204112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.204376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.204408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.204659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.204691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.204997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.205030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.205241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.205273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.205419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.205452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.205582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.205620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.205914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.205947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 
00:36:42.831 [2024-12-13 12:42:10.206240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.206272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.206408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.206439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.206736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.206767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.207005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.207038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.207224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.207255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.207463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.207495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.207768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.207812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.208070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.208101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.208299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.208331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.208605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.208636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 
00:36:42.831 [2024-12-13 12:42:10.208925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.208959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.209100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.209131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.209388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.209421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.209718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.209749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.209978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.210012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.210264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.210315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.210601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.210632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.210923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.210957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.211142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.211173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.211441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.211473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 
00:36:42.831 [2024-12-13 12:42:10.211723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.831 [2024-12-13 12:42:10.211754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.831 qpair failed and we were unable to recover it. 00:36:42.831 [2024-12-13 12:42:10.211946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.211979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.212234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.212265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.212466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.212498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.212776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.212820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.212969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.213000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.213190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.213221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.213490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.213523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.213728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.213760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.213965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.213998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 
00:36:42.832 [2024-12-13 12:42:10.214180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.214211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.214404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.214436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.214640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.214673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.214883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.214917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.215191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.215223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.215503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.215535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.215841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.215875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.216129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.216161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.216383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.216415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.216696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.216733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 
00:36:42.832 [2024-12-13 12:42:10.217034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.217068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.217330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.217363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.217575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.217609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.217883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.217917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.218198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.218230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.218430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.218462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.218718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.218755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.218899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.218933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.219159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.219191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.219397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.219429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 
00:36:42.832 [2024-12-13 12:42:10.219612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.219644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.219904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.219938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.220142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.220173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.220439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.220471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.220595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.220626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.220763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.220802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.220929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.832 [2024-12-13 12:42:10.220961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.832 qpair failed and we were unable to recover it. 00:36:42.832 [2024-12-13 12:42:10.221162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.221192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.221442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.221475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.221677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.221708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 
00:36:42.833 [2024-12-13 12:42:10.221904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.221938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.222081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.222112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.222246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.222278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.222533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.222575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.222719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.222749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.222977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.223010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.223251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.223289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.223485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.223517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.223658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.223690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.223914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.223948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 
00:36:42.833 [2024-12-13 12:42:10.224215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.224247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.224442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.224474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.224732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.224763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.224967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.225001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.225116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.225148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.225335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.225367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.225552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.225584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.225803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.225836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.225978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.226009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.226196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.226229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 
00:36:42.833 [2024-12-13 12:42:10.226373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.226405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.226606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.226638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.226881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.227106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.227264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.227416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.227632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.227849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.227979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.228010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.228142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.228173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 
00:36:42.833 [2024-12-13 12:42:10.228303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.228335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.228477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.228510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.228663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.228693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.228973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.229007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.229160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.229191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.229440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.229473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.833 qpair failed and we were unable to recover it. 00:36:42.833 [2024-12-13 12:42:10.229670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.833 [2024-12-13 12:42:10.229706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.229894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.230082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.230114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.230237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.230270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 
00:36:42.834 [2024-12-13 12:42:10.230393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.230426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.230561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.230599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.230796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.230830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.231896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.231929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.232125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.232157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 
00:36:42.834 [2024-12-13 12:42:10.232292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.232324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.232441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.232474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.232600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.232632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.232884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.232917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.233173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.233205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.233352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.233383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.233569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.233601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.233727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.233758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.233958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.233991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.235161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.235222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 
00:36:42.834 [2024-12-13 12:42:10.235460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.235495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.235649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.235682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.235892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.235926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.236088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.236299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.236511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.236677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.236846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.834 [2024-12-13 12:42:10.236972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.834 [2024-12-13 12:42:10.237005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.834 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.237208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.237243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 
00:36:42.835 [2024-12-13 12:42:10.237383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.237416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.237606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.237638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.237770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.237813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.237929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.237960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.238917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.238957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 
00:36:42.835 [2024-12-13 12:42:10.239094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.239266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.239414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.239568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.239729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.239954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.239987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.240100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.240267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.240443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.240604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 
00:36:42.835 [2024-12-13 12:42:10.240766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.240956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.240989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.241940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.242215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.242247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.242364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.242397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 
00:36:42.835 [2024-12-13 12:42:10.242519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.242551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.242668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.242700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.242833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.242867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.242985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.243018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.243209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.243243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.243432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.243464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.243655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.243688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.243828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.835 [2024-12-13 12:42:10.243862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.835 qpair failed and we were unable to recover it. 00:36:42.835 [2024-12-13 12:42:10.243985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.244155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 
00:36:42.836 [2024-12-13 12:42:10.244307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.244458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.244622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.244797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.244846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.244970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.245118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.245280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.245419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.245573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.245795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 
00:36:42.836 [2024-12-13 12:42:10.245936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.245968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.246087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.246118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.246343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.246373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.246548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.246625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.246810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.246854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.247069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.247107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.247364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.247529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.247574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.247709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.247752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 00:36:42.836 [2024-12-13 12:42:10.247932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.836 [2024-12-13 12:42:10.247976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.836 qpair failed and we were unable to recover it. 
[... the same three-line failure repeats for tqpair=0x7ff6e4000b90 through 12:42:10.270758 ...]
00:36:42.840 [2024-12-13 12:42:10.271042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.840 [2024-12-13 12:42:10.271124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:42.840 qpair failed and we were unable to recover it.
[... repeats for tqpair=0xdd56a0 through 12:42:10.271750 ...]
00:36:42.840 [2024-12-13 12:42:10.271942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.840 [2024-12-13 12:42:10.272015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.840 qpair failed and we were unable to recover it.
00:36:42.840 [2024-12-13 12:42:10.272153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.840 [2024-12-13 12:42:10.272179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.840 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7ff6e4000b90 through 12:42:10.274513 ...]
00:36:42.840 [2024-12-13 12:42:10.274698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.840 [2024-12-13 12:42:10.274741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.840 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7ff6d8000b90 through 12:42:10.280006 ...]
00:36:42.841 [2024-12-13 12:42:10.280118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.280279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.280418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.280565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.280774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.280946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.280978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.281107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.281140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.281254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.281286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.281484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.281516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.281700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.281772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 
00:36:42.841 [2024-12-13 12:42:10.282108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.282148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.282262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.282294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.282402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.282433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.282539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.841 [2024-12-13 12:42:10.282572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.841 qpair failed and we were unable to recover it. 00:36:42.841 [2024-12-13 12:42:10.282709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.282739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.282863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.282897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.283005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.283039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.283233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.283265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.283493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.283525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.283644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.283676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 
00:36:42.842 [2024-12-13 12:42:10.283870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.283903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.284086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.284118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.284250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.284281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.284535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.284567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.284697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.284729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.284927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.284961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.285067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.285099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.285211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.285243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.285425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.285457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.285564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.285596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 
00:36:42.842 [2024-12-13 12:42:10.285775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.285820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.286032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.286272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.286414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.286631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.286771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.286965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.287004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.287200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.287232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.287407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.287438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.287548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.287581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 
00:36:42.842 [2024-12-13 12:42:10.287864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.287897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.288025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.288058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.288248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.288280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.288456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.288488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.288661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.288692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.288809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.288844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.289037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.289069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.289192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.289224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.289487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.289519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.842 [2024-12-13 12:42:10.289700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.289732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 
00:36:42.842 [2024-12-13 12:42:10.289872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.842 [2024-12-13 12:42:10.289907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.842 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.290111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.290143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.290348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.290379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.290500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.290532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.290666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.290699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.290874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.290907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.291090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.291330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.291483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.291635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 
00:36:42.843 [2024-12-13 12:42:10.291779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.291935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.291967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.292209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.292240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.292425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.292462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.292657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.292688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.292819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.292853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.292977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.293008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.293189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.293220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.293340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.293373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.293686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.293722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 
00:36:42.843 [2024-12-13 12:42:10.293937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.293970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.294152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.294185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.294366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.294397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.294501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.294535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.294642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.294884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.294918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.295030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.295062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.295266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.295309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.295447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.295488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.295687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.295721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 
00:36:42.843 [2024-12-13 12:42:10.295912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.295946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.296053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.296085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.296194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.296227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.296347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.296379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.296576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.296609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.843 qpair failed and we were unable to recover it. 00:36:42.843 [2024-12-13 12:42:10.296795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.843 [2024-12-13 12:42:10.296829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.297007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.297216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.297423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.297567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 
00:36:42.844 [2024-12-13 12:42:10.297724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.297905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.297938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.298118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.298150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.298275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.298307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.298493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.298525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.298633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.298665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.298882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.299073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.299105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.299297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.299329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.299453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.299487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 
00:36:42.844 [2024-12-13 12:42:10.299608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.299641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.299826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.299868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.300045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.300077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.300247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.300279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.300488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.300520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.300634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.300665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.300875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.300908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.301119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.301154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.301293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.301506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.301538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 
00:36:42.844 [2024-12-13 12:42:10.301732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.301764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.301938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.301973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.302151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.302182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.302442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.302475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.302596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.302629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.302737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.302770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.302980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.303152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.303323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.303539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 
00:36:42.844 [2024-12-13 12:42:10.303672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.303875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.303903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.844 qpair failed and we were unable to recover it. 00:36:42.844 [2024-12-13 12:42:10.304071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.844 [2024-12-13 12:42:10.304096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.304231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.304372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.304401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.304497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.304521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.304771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.304806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.304963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.304988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.305213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.305243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 00:36:42.845 [2024-12-13 12:42:10.305398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.845 [2024-12-13 12:42:10.305439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.845 qpair failed and we were unable to recover it. 
00:36:42.845 [2024-12-13 12:42:10.305556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.305588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.305772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.305807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.305994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.306114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.306248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.306394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.306529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.306724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.306751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.307020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.307048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.307276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.307301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.307473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.307499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.307745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.307770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.307880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.307903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.308964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.308988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.309105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.309242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.309436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.309795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.309976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.310008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.310186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.310218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.310402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.310433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.310654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.310727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.310876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.310914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.311038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.311071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.311266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.845 [2024-12-13 12:42:10.311300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.845 qpair failed and we were unable to recover it.
00:36:42.845 [2024-12-13 12:42:10.311423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.311455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.311579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.311611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.311718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.311749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.312047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.312118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.312324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.312364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.312476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.312508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.312757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.312803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.313124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.313156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.313337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.313370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.313578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.313617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.313741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.313773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.313987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.314022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.314192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.314224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.314354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.314386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.314516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.314548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.314722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.314755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.315050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.315083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.315274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.315306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.315494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.315526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.315724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.315756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.315979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.316013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.316132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.316163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.316346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.316378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.316566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.316599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.316793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.316837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.317046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.317257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.317289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.317398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.317430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.317632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.317663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.317837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.317872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.318059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.318091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.318219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.318251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.318386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.318418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.318680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.318712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.318974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.319007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.319141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.319174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.319389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.319427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.846 [2024-12-13 12:42:10.319698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.846 [2024-12-13 12:42:10.319731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.846 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.319937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.319970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.320104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.320136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.320379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.320411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.320650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.320682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.320868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.320905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.321170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.321202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.321310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.321342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.321473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.321506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.321687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.321720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.321826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.321860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.322149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.322181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.322309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.322343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.322539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.322590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.322856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.322890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.323064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.323095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.323274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.323305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.323551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.323583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.323751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.323791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.323982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.324183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.324329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.324500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.324654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.324820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.324856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.325051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.325083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.325223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.325255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.325425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.325458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.325629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.325660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.325849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.325883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.326054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.326086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.326267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.326299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.326536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.326567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.326744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.326776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.326974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.327006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.327180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.327400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.327432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.327624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.327657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.327848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.847 qpair failed and we were unable to recover it.
00:36:42.847 [2024-12-13 12:42:10.327985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.847 [2024-12-13 12:42:10.328023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.328291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.328323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.328500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.328530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.328700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.328733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.328873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.328909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.329118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.329150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.329262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.329294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.329531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.329563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.329744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.329775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.330044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.330201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.330411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.330655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.330871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.330979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.331011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.331228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.331258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.331443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.331475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.331600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.331631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.331818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.331852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.332094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.332126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.332298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.332329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.332573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.332604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.332712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.332744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.332878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.332910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.333022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.333053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.333238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.333270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.333522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.333553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.333744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.333775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.333974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.334126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.334275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.334546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.334763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.848 qpair failed and we were unable to recover it.
00:36:42.848 [2024-12-13 12:42:10.334935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.848 [2024-12-13 12:42:10.334967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.335079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.335111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.335330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.335361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.335541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.335573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.335694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.335726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.335937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.335970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.336145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.336177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.336365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.336403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.336581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.336613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.336798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.336831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.336952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.336984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.337172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.337203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.337379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.337411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.337580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.337611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.337798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.337841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.338050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.338081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.338215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.338246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.338509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.338541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.338662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.338693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.338933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.338967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.339067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.339100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.339303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.339334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.339506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.339538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.339776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.339821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.339933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.339965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.340147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.340178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.340350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.340382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.340516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.340547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.340791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.340824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.341013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.341044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.341213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.341245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.341451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.341482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.341611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.341642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.341817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.341857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.342139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.342171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.342406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.342438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.342563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.342595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.342768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.849 [2024-12-13 12:42:10.342820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.849 qpair failed and we were unable to recover it.
00:36:42.849 [2024-12-13 12:42:10.343014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.343046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.343223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.343255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.343446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.343477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.343590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.343621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.343752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.343794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.344052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.344084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.344275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.344307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.344500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.344531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.344825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.344859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.345928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.345960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.346080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.346111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.346317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.346348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.346539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.346571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.346833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.346868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.347054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.347087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.347327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.347359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.347496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.347528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.347651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.347683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.347868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.347900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.348020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.348051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.348240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.348273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.348513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.348544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.348737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.348769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.348896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.348949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.349143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.349175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.349278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.349310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.349585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.850 [2024-12-13 12:42:10.349617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.850 qpair failed and we were unable to recover it.
00:36:42.850 [2024-12-13 12:42:10.349725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.349756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.349943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.349976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.350180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.350213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.350464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.350496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.350824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.350861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.350986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.351018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.850 [2024-12-13 12:42:10.351204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.850 [2024-12-13 12:42:10.351235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.850 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.351410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.351441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.351560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.351592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.351830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.351864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 
00:36:42.851 [2024-12-13 12:42:10.352125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.352157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.352261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.352293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.352500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.352531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.352655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.352687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.352869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.352902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.353149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.353180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.353415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.353447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.353582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.353620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.353825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.353857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.354099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.354131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 
00:36:42.851 [2024-12-13 12:42:10.354340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.354372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.354612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.354643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.354852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.354887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.355076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.355108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.355348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.355379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.355566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.355598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.355729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.355761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.355961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.355994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.356237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.356269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.356454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.356487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 
00:36:42.851 [2024-12-13 12:42:10.356609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.356641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.356843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.356876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.356992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.357024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.357135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.357166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.357436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.357468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.357658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.357689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.357931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.357964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.358081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.358113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.358231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.358262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.358381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.358413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 
00:36:42.851 [2024-12-13 12:42:10.358594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.358626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.358746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.358777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.358978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.359012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.359196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.359228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.359424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.851 [2024-12-13 12:42:10.359457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.851 qpair failed and we were unable to recover it. 00:36:42.851 [2024-12-13 12:42:10.359640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.359671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.359774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.359831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.359941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.359973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.360084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.360115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.360296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.360327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 
00:36:42.852 [2024-12-13 12:42:10.360495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.360527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.360644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.360675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.360867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.360900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.361077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.361108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.361343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.361374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.361613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.361644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.361819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.361852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.362114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.362151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.362346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.362377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.362643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.362674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 
00:36:42.852 [2024-12-13 12:42:10.362804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.362845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.363954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.363986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.364162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.364194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.364457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.364487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.364621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.364652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 
00:36:42.852 [2024-12-13 12:42:10.364765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.364808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.365109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.365321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.365352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.365535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.365566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.365698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.365729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.366002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.366035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.366271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.366302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.366500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.366531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.852 qpair failed and we were unable to recover it. 00:36:42.852 [2024-12-13 12:42:10.366777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.852 [2024-12-13 12:42:10.366832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.367009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.367041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 
00:36:42.853 [2024-12-13 12:42:10.367218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.367249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.367443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.367475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.367593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.367625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.367831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.367865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.368007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.368039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.368291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.368323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.368560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.368592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.368764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.368803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.369072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.369104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.369362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.369394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 
00:36:42.853 [2024-12-13 12:42:10.369585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.369616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.369803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.369837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.370035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.370067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.370246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.370277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.370513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.370545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.370802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.370845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.371062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.371093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.371200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.371238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.371426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.371457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.371629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.371661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 
00:36:42.853 [2024-12-13 12:42:10.371842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.371875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.372050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.372082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.372264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.372295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.372423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.372454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.372694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.372725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.372919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.372952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.373200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.373231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.373404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.373442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.373686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.373718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.373940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 
00:36:42.853 [2024-12-13 12:42:10.374058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.374266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.374402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.374547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.374753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.374935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.374967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.375070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.853 [2024-12-13 12:42:10.375102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.853 qpair failed and we were unable to recover it. 00:36:42.853 [2024-12-13 12:42:10.375270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.375302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.375469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.375500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.375619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.375650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 
00:36:42.854 [2024-12-13 12:42:10.375860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.375894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.376048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.376310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.376538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.376689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.376844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.376974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.377007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.377177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.377208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.377310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.377342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 00:36:42.854 [2024-12-13 12:42:10.377542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.854 [2024-12-13 12:42:10.377574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.854 qpair failed and we were unable to recover it. 
00:36:42.854 [2024-12-13 12:42:10.377811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.854 [2024-12-13 12:42:10.377844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.854 qpair failed and we were unable to recover it.
00:36:42.855 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet above repeats 153 more times for tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 between 12:42:10.378024 and 12:42:10.409585 ...]
00:36:42.858 [... the same triplet repeats 40 times for tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 between 12:42:10.409736 and 12:42:10.418393 ...]
00:36:42.860 [... the same triplet repeats 16 times for tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 between 12:42:10.418620 and 12:42:10.421991 ...]
00:36:42.860 [2024-12-13 12:42:10.422181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.422212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.422344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.422376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.422614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.422645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.422913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.422950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.423129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.423161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.423268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.423299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.423489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.423520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.423683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.423714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.423887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.423919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.424185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.424216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 
00:36:42.860 [2024-12-13 12:42:10.424401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.424432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.424537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.424575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.424773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.424814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.425007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.425038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.425275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.425306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.425425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.425456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.425566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.425596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.425812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.425845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.426029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.426061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.426300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.426331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 
00:36:42.860 [2024-12-13 12:42:10.426516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.426547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.426737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.426768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.426952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.426984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.427158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.427188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.427429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.427459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.427748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.427791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.428007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.428039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.428279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.428310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.428495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.428526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.428643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.428674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 
00:36:42.860 [2024-12-13 12:42:10.428851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.428885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.428999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.429030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.429220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.429252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.429441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.429472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.860 [2024-12-13 12:42:10.429580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.860 [2024-12-13 12:42:10.429611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.860 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.429873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.429907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.430080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.430111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.430350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.430381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.430514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.430545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.430734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.430766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 
00:36:42.861 [2024-12-13 12:42:10.431022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.431054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.431155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.431187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.431389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.431420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.431606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.431637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.431896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.431928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.432118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.432149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.432319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.432350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.432453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.432484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.432598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.432629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.432752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.432803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 
00:36:42.861 [2024-12-13 12:42:10.432988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.433021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.433204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.433234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.433423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.433460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.433679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.433710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.433897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.433930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.434034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.434252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.434460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.434628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.434793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 
00:36:42.861 [2024-12-13 12:42:10.434965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.434997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.435239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.435270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.435451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.435483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.435614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.435645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.435764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.435802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.435983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.436015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.436192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.436224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.436476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.436507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.436703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.436734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.436931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.436963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 
00:36:42.861 [2024-12-13 12:42:10.437139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.437170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.437343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.437374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.437632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.437664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.437780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.861 [2024-12-13 12:42:10.437821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.861 qpair failed and we were unable to recover it. 00:36:42.861 [2024-12-13 12:42:10.437992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.438259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.438406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.438554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.438715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.438862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.438901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 
00:36:42.862 [2024-12-13 12:42:10.439163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.439195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.439296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.439326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.439560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.439592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.439823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.439938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.439969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.440079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.440111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.440250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.440282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.440397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.440428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.440668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.440699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.440891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.440925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 
00:36:42.862 [2024-12-13 12:42:10.441054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.441086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.441290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.441321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.441576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.441608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.441812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.441845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.442087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.442118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.442406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.442437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.442676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.442707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.442946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.442979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.443244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.443275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.443470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.443501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 
00:36:42.862 [2024-12-13 12:42:10.443625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.443656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.443896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.443928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.444124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.444155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.444284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.444316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.444440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.444471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.444668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.444700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.444894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.444927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.445202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.445234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.862 [2024-12-13 12:42:10.445415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.862 [2024-12-13 12:42:10.445446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.862 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.445699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.445732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 
00:36:42.863 [2024-12-13 12:42:10.445928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.445960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.446170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.446201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.446446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.446478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.446608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.446639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.446757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.446796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.446904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.446936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.447043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.447074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.447264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.447296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.447476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.447507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.447700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.447732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 
00:36:42.863 [2024-12-13 12:42:10.447927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.447964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.448210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.448241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.448370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.448401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.448525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.448556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.448748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.448779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.448985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.449017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.449125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.449157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.449395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.449427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.449666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.449697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.449873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.449906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 
00:36:42.863 [2024-12-13 12:42:10.450172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.450204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.450460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.450492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.450753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.450790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.450975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.451007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.451278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.451309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.451489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.451521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.451791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.452010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.452041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.452233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.452264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.452427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.452458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 
00:36:42.863 [2024-12-13 12:42:10.452642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.452674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.452948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.452982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.453089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.453119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.453359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.453390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.453492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.453522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.863 qpair failed and we were unable to recover it. 00:36:42.863 [2024-12-13 12:42:10.453792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.863 [2024-12-13 12:42:10.453824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.454007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.454039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.454169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.454207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.454463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.454494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.454664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.454696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 
00:36:42.864 [2024-12-13 12:42:10.454893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.454927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.455165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.455196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.455441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.455472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.455646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.455677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.455921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.455954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.456074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.456106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.456273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.456304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.456439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.456470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.456654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.456686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.456821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.456854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 
00:36:42.864 [2024-12-13 12:42:10.457118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.457150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.457317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.457387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.457537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.457573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.457839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.457872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.458050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.458081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.458251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.458283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.458452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.458483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.458601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.458632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.458806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.458839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.459075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.459107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 
00:36:42.864 [2024-12-13 12:42:10.459317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.459600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.459631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.459823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.459855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.460055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.460086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.460259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.460299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.460562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.460593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.460889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.460922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.461042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.461073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.461333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.461365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.461604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.461636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 
00:36:42.864 [2024-12-13 12:42:10.461826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.461859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.462048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.462079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.462264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.462296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.462598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.462629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.864 qpair failed and we were unable to recover it. 00:36:42.864 [2024-12-13 12:42:10.462821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.864 [2024-12-13 12:42:10.462854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.463121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.463153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.463342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.463373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.463493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.463671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.463703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.463903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.463935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 
00:36:42.865 [2024-12-13 12:42:10.464117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.464149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.464322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.464354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.464542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.464573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.464678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.464710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.464907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.464940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.465049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.465081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.465271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.465303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.465452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.465484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.465585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.465615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.465800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.465833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 
00:36:42.865 [2024-12-13 12:42:10.466022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.466053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.466239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.466309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.466577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.466645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.466910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.466946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.467238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.467269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.467542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.467574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.467817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.467849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.468048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.468079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.468337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.468368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.468554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.468584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 
00:36:42.865 [2024-12-13 12:42:10.468793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.469007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.469039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.469157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.469188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.469370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.469401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.469601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.469633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.469902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.469935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.470175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.470206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.470385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.470417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.470673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.470704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.470874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.470907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 
00:36:42.865 [2024-12-13 12:42:10.471123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.471155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.471416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.471447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.471621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.471652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.865 [2024-12-13 12:42:10.471841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.865 [2024-12-13 12:42:10.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.865 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.472133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.472164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.472420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.472452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.472713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.472744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.472886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.472917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.473058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.473105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.473318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.473349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 
00:36:42.866 [2024-12-13 12:42:10.473585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.473617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.473886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.473920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.474126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.474157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.474343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.474375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.474638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.474670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.474806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.474839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.475051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.475082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.475328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.475359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.475481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.475513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.475771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.475814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 
00:36:42.866 [2024-12-13 12:42:10.476054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.476086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.476354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.476385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.476679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.476712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.476836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.476868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.477110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.477141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.477247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.477279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.477393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.866 [2024-12-13 12:42:10.477424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.866 qpair failed and we were unable to recover it. 00:36:42.866 [2024-12-13 12:42:10.477538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.477569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.477816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.477849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.478042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.478074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 
00:36:42.867 [2024-12-13 12:42:10.478242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.478274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.478463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.478495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.478737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.478768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.478965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.478998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.479182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.479213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.479387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.479424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.479633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.479665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.479844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.479877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.480055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.480087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.480304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 
00:36:42.867 [2024-12-13 12:42:10.480489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.480520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.480624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.480656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.480928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.481099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.481130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.481405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.481436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.481621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.481653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.481835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.481869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.482105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.482136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.482318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.482349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.482554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.482598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 
00:36:42.867 [2024-12-13 12:42:10.482725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.482758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.482969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.483001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.867 [2024-12-13 12:42:10.483113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.867 [2024-12-13 12:42:10.483144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.867 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.483330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.483361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.483532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.483562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.483810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.483850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.483982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.484014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.484272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.484303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.484510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.484542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.484712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.484744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 
00:36:42.868 [2024-12-13 12:42:10.484929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.484962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.485145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.485178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.485360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.485402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.485612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.485644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.485833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.485866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.486045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.486076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.486251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.486282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.486472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.486504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.486679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.486711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.486902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.486934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 
00:36:42.868 [2024-12-13 12:42:10.487043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.487075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.487264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.487295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.487408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.487440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.487649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.487681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.487802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.487851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.488036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.488069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.488311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.488344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.488459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.488491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.488677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.488709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 00:36:42.868 [2024-12-13 12:42:10.488902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.868 [2024-12-13 12:42:10.488935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:42.868 qpair failed and we were unable to recover it. 
00:36:42.868 [2024-12-13 12:42:10.489190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.868 [2024-12-13 12:42:10.489222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:42.868 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for roughly 200 further connection attempts between 12:42:10.489433 and 12:42:10.532699 (elapsed 00:36:42.868-00:36:43.159): connect() failed with errno = 111 in posix.c:1054:posix_sock_create, followed by a sock connection error in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock against addr=10.0.0.2, port=4420, cycling through tqpair values 0x7ff6d8000b90, 0x7ff6dc000b90, and 0xdd56a0, each attempt ending "qpair failed and we were unable to recover it." ...]
00:36:43.159 [2024-12-13 12:42:10.533047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.159 [2024-12-13 12:42:10.533082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.159 qpair failed and we were unable to recover it.
00:36:43.159 [2024-12-13 12:42:10.533295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.533333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.533453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.533484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.533656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.533689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.533898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.533931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.534053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.534085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.534262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.534294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.534496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.534529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.534634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.534666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.534796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.534829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.535095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.535127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 
00:36:43.159 [2024-12-13 12:42:10.535355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.535388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.535628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.535660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.535828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.535862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.536052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.536084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.536286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.536318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.536423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.536455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.536719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.536751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.536897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.536936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.537126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.537158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.537286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.537317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 
00:36:43.159 [2024-12-13 12:42:10.537526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.537559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.537731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.537764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.537986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.538019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.538208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.538240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.538353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.538385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.538575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.538607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.538722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.538754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.538960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.539001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.539141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.539173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.539358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.539390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 
00:36:43.159 [2024-12-13 12:42:10.539665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.539697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.539871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.539905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.540019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.540052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.540288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.540320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.540577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.540610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.540789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.540821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.540947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.540979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.159 qpair failed and we were unable to recover it. 00:36:43.159 [2024-12-13 12:42:10.541152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.159 [2024-12-13 12:42:10.541184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.541356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.541388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.541590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.541622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 
00:36:43.160 [2024-12-13 12:42:10.541859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.541893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.542072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.542105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.542291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.542322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.542503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.542534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.542789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.542822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.542934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.542967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.543160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.543191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.543305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.543338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.543442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.543475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.543595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.543626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 
00:36:43.160 [2024-12-13 12:42:10.543909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.543942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.544062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.544094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.544293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.544324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.544521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.544553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.544759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.544807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.544980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.545013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.545190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.545223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.545337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.545368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.545548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.545581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.545752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.545794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 
00:36:43.160 [2024-12-13 12:42:10.545982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.546014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.546272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.546304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.546430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.546462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.546566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.546598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.546866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.546900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.547019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.547051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.547176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.547208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.547472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.547527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.547717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.547750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.547884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.547920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 
00:36:43.160 [2024-12-13 12:42:10.548036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.548069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.548257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.548288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.548411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.548444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.548666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.548698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.160 [2024-12-13 12:42:10.548878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.160 [2024-12-13 12:42:10.548912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.160 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.549034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.549067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.549256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.549287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.549467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.549500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.549689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.549721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.549906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.549940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 
00:36:43.161 [2024-12-13 12:42:10.550055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.550088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.550308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.550341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.550536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.550568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.550776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.550816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.551014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.551047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.551153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.551184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.551427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.551460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.551594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.551627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.551905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.551938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.552058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.552089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 
00:36:43.161 [2024-12-13 12:42:10.552273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.552305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.552478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.552510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.552694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.552726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.552906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.552939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.553136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.553169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.553355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.553386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.553577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.553609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.553800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.553832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.554009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.554041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.554309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.554340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 
00:36:43.161 [2024-12-13 12:42:10.554452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.554484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.554595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.554628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.554810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.554843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.555033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.555064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.555238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.555270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.555391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.555422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.555664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.555696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.555864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.555903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.556105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.556137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.556390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.556422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 
00:36:43.161 [2024-12-13 12:42:10.556662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.556693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.556902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.556935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.557180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.557211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.161 [2024-12-13 12:42:10.557451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.161 [2024-12-13 12:42:10.557482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.161 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.557689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.557721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.557894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.557927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.558117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.558150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.558271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.558303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.558490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.558522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.558772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.558819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 
00:36:43.162 [2024-12-13 12:42:10.559019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.559051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.559237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.559269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.559457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.559488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.559617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.559649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.559907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.559940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.560049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.560081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.560257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.560290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.560426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.560458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.560575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.560607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 00:36:43.162 [2024-12-13 12:42:10.560795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.162 [2024-12-13 12:42:10.560828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.162 qpair failed and we were unable to recover it. 
00:36:43.162 [2024-12-13 12:42:10.561101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.162 [2024-12-13 12:42:10.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.162 qpair failed and we were unable to recover it.
00:36:43.162 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triplet repeats continuously from 12:42:10.561 through 12:42:10.606, for tqpair handles 0x7ff6dc000b90 and 0xdd56a0, every attempt against addr=10.0.0.2, port=4420 with errno = 111; duplicate log lines elided ...]
00:36:43.167 [2024-12-13 12:42:10.606423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.606568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.606600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.606796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.606828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.607027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.607059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.607322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.607353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.607534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.607566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.607761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.607815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.608061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.608092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.608216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.608248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.608362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.608393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 
00:36:43.167 [2024-12-13 12:42:10.608589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.608621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.608740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.608777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.609052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.609085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.609259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.609290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.609474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.609508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.609682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.609714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.609821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.609851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.610184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.610216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.610339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.610370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.610552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.610583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 
00:36:43.167 [2024-12-13 12:42:10.610827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.610860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.611040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.167 [2024-12-13 12:42:10.611071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.167 qpair failed and we were unable to recover it. 00:36:43.167 [2024-12-13 12:42:10.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.611290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.611408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.611440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.611683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.611715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.611911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.611947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.612074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.612104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.612251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.612283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.612404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.612435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.612626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.612657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 
00:36:43.168 [2024-12-13 12:42:10.612931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.612965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.613090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.613123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.613330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.613362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.613542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.613574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.613813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.613847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.613967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.613999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.614211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.614242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.614438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.614469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.614592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.614630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.614873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.614913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 
00:36:43.168 [2024-12-13 12:42:10.615160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.615192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.615398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.615429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.615559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.615590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.615775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.615830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.615944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.615976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.616148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.616180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.616363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.616395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.616569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.616599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.616801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.616835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.617075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.617107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 
00:36:43.168 [2024-12-13 12:42:10.617284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.617316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.617424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.617455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.617587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.617620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.617832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.617865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.618074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.618105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.618342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.618375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.618547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.618578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.168 [2024-12-13 12:42:10.618688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.168 [2024-12-13 12:42:10.618720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.168 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.618892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.618925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.619162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.619193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 
00:36:43.169 [2024-12-13 12:42:10.619325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.619356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.619478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.619510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.619703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.619735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.620024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.620057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.620242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.620272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.620463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.620507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.620697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.620731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.620915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.620949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.621080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.621112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.621296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.621329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 
00:36:43.169 [2024-12-13 12:42:10.621435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.621466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.621666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.621699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.621993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.622152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.622307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.622508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.622710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.622958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.622994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.623173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.623319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 
00:36:43.169 [2024-12-13 12:42:10.623474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.623631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.623778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.623934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.623966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.624078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.624109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.624373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.624404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.624572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.624603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.624709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.624739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.624882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.624915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.625205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.625237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 
00:36:43.169 [2024-12-13 12:42:10.625350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.625381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.625516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.625548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.625724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.625756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.625884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.625916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.626138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.626169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.626342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.626373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.626546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.626577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.626766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.626817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.626944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.169 [2024-12-13 12:42:10.626977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.169 qpair failed and we were unable to recover it. 00:36:43.169 [2024-12-13 12:42:10.627162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.627192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 
00:36:43.170 [2024-12-13 12:42:10.627360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.627392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.627576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.627607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.627801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.627833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.627934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.627966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.628096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.628128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.628305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.628336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.628618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.628649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.628825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.628858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.629139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.629170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.629378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.629410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 
00:36:43.170 [2024-12-13 12:42:10.629583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.629614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.629801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.629833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.630010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.630041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.630285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.630316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.630550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.630582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.630695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.630727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.630907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.630942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.631132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.631164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.631283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.631327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.631497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.631529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 
00:36:43.170 [2024-12-13 12:42:10.631715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.631747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.631913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.632051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.632253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.632465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.632627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.632772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.632979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.633012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.633249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.633280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 00:36:43.170 [2024-12-13 12:42:10.633464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.170 [2024-12-13 12:42:10.633496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.170 qpair failed and we were unable to recover it. 
00:36:43.170 [2024-12-13 12:42:10.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.170 [2024-12-13 12:42:10.633664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.170 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 00:36:43.170 to 00:36:43.175 (log timestamps 12:42:10.633632 through 12:42:10.677864): connect() fails with errno = 111 and "qpair failed and we were unable to recover it." for tqpair handles 0x7ff6d8000b90, 0xdd56a0, 0x7ff6e4000b90, and 0x7ff6dc000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:36:43.175 [2024-12-13 12:42:10.677832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.175 [2024-12-13 12:42:10.677864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.175 qpair failed and we were unable to recover it.
00:36:43.175 [2024-12-13 12:42:10.677983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.175 [2024-12-13 12:42:10.678015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.175 qpair failed and we were unable to recover it. 00:36:43.175 [2024-12-13 12:42:10.678145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.175 [2024-12-13 12:42:10.678177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.175 qpair failed and we were unable to recover it. 00:36:43.175 [2024-12-13 12:42:10.678343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.175 [2024-12-13 12:42:10.678375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.678556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.678588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.678701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.678734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.678985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.679017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.679144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.679176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.679314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.679346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.679531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.679562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.679752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.679793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 
00:36:43.176 [2024-12-13 12:42:10.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.680119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.680264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.680404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.680647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.680878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.680911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.681022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.681053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.681236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.681268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.681373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.681404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.681618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 
00:36:43.176 [2024-12-13 12:42:10.681802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.681840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.682925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.682959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.683146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.683177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.683388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.683421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.683669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.683701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 
00:36:43.176 [2024-12-13 12:42:10.683892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.683926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.684103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.684133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.684320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.684353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.684471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.684503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.684720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.684752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.684898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.684937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.685062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.685094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.685210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.685242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.685442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.176 [2024-12-13 12:42:10.685474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.176 qpair failed and we were unable to recover it. 00:36:43.176 [2024-12-13 12:42:10.685588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.685620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 
00:36:43.177 [2024-12-13 12:42:10.685740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.685772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.686094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.686315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.686449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.686594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.686874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.686980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.687191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.687335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.687458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 
00:36:43.177 [2024-12-13 12:42:10.687608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.687849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.687883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.688875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.688908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.689080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.689112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.689322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.689353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 
00:36:43.177 [2024-12-13 12:42:10.689536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.689573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.689687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.689719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.689845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.689878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.690062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.690093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.690354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.690386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.690586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.690618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.690756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.690800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.690931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.690964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.691105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.691136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.691262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.691298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 
00:36:43.177 [2024-12-13 12:42:10.691511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.691541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.691657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.691689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.691893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.691927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.692964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.692996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 00:36:43.177 [2024-12-13 12:42:10.693181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.177 [2024-12-13 12:42:10.693212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.177 qpair failed and we were unable to recover it. 
00:36:43.177 [2024-12-13 12:42:10.693451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.693482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.693593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.693625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.693800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.693832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.693952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.693983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.694192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.694337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.694471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.694642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.694869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.694979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 
00:36:43.178 [2024-12-13 12:42:10.695131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.695335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.695491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.695644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.695869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.695902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.696082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.696113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.696230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.696262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.696452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.696609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.696640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.696868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.696900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 
00:36:43.178 [2024-12-13 12:42:10.697082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.697119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.697301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.697333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.697439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.697470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.697585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.697616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.697794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.697827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.698001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.698155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.698315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.698544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.698709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 
00:36:43.178 [2024-12-13 12:42:10.698926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.698963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.699090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.699121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.699235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.699266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.699440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.699471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.699622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.699802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.699835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.700010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.700043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.700146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.700176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.700283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.700314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 00:36:43.178 [2024-12-13 12:42:10.700500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.178 [2024-12-13 12:42:10.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.178 qpair failed and we were unable to recover it. 
00:36:43.178 [2024-12-13 12:42:10.700698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.700729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.700910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.700943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.701078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.701278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.701446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.701603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.701834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.701970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.702002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.702115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.702145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 00:36:43.179 [2024-12-13 12:42:10.702263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.179 [2024-12-13 12:42:10.702294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.179 qpair failed and we were unable to recover it. 
00:36:43.179 [2024-12-13 12:42:10.702484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.179 [2024-12-13 12:42:10.702515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.179 qpair failed and we were unable to recover it.
[log condensed: the three-line posix_sock_create / nvme_tcp_qpair_connect_sock / qpair-failure sequence above repeats without interruption from 12:42:10.702484 through 12:42:10.740647, first for tqpair=0x7ff6d8000b90, then 0x7ff6dc000b90, 0x7ff6e4000b90, and finally 0xdd56a0; every attempt is to addr=10.0.0.2, port=4420 and every one fails with errno = 111]
00:36:43.184 [2024-12-13 12:42:10.740616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.184 [2024-12-13 12:42:10.740647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.184 qpair failed and we were unable to recover it.
00:36:43.184 [2024-12-13 12:42:10.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.740797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.740919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.740949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.741880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.741911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.742022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.742224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 
00:36:43.184 [2024-12-13 12:42:10.742373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.742530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.742698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.742871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.742900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.743946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.743970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 
00:36:43.184 [2024-12-13 12:42:10.744070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.744095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.744204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.184 [2024-12-13 12:42:10.744230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.184 qpair failed and we were unable to recover it. 00:36:43.184 [2024-12-13 12:42:10.744332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.744357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.744448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.744471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.744579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.744604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.744693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.744717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.744812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.744838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 
00:36:43.185 [2024-12-13 12:42:10.745465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.745918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.745972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.746156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.746189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.746294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.746326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.746465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.746499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.746605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.746638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.746749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.746791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.747013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.747046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 
00:36:43.185 [2024-12-13 12:42:10.747305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.747338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.747472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.747512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.747687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.747719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.747899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.747933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.749320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.749374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.749639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.749673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.749878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.749913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.750177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.750209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.750323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.750355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.750482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.750514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 
00:36:43.185 [2024-12-13 12:42:10.750641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.750673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.750851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.750884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.750986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.751141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.751345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.751503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.751668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.185 [2024-12-13 12:42:10.751879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.185 [2024-12-13 12:42:10.751913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.185 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.752156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.752188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.752308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.752342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 
00:36:43.186 [2024-12-13 12:42:10.752525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.752557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.752666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.752699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.752826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.752860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.753114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.753248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.753385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.753591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.753777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.753980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.754274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 
00:36:43.186 [2024-12-13 12:42:10.754407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.754546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.754689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.754828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.754962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.754993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.755098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.755127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.755305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.755336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.755452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.755481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.755651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.755682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.755805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.755839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 
00:36:43.186 [2024-12-13 12:42:10.756013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.756045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.756215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.756246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.756366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.756397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.756588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.756621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.756805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.756839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.757023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.757227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.757427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.757625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.757764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 
00:36:43.186 [2024-12-13 12:42:10.757919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.757950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.758160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.758363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.758516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.758656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.758839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.758992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.759025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.759285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.759319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.759495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.759528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 00:36:43.186 [2024-12-13 12:42:10.759698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.186 [2024-12-13 12:42:10.759729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.186 qpair failed and we were unable to recover it. 
00:36:43.186 [2024-12-13 12:42:10.759882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.759918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.760843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.760877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.761054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.761086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.761258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.761291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.761463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.761495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 
00:36:43.187 [2024-12-13 12:42:10.761664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.761696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.761876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.761910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.762176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.762209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.762444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.762476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.762581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.762612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.762798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.762831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.763089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.763121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.763246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.763277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.763387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.763418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.763523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.763555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 
00:36:43.187 [2024-12-13 12:42:10.763800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.763833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.763975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.764007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.764214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.764253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.764439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.764471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.764644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.764676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.766063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.766115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.766257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.766289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.766470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.766502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.766694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.766727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 00:36:43.187 [2024-12-13 12:42:10.766969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.767003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it. 
00:36:43.187 [2024-12-13 12:42:10.767195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.187 [2024-12-13 12:42:10.767227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.187 qpair failed and we were unable to recover it.
00:36:43.192 [2024-12-13 12:42:10.804624] (the three-line sequence above — connect() failed, errno = 111 / sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats verbatim, with only the timestamps advancing, from 12:42:10.767357 through 12:42:10.804624)
00:36:43.192 [2024-12-13 12:42:10.804835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.192 [2024-12-13 12:42:10.804907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.192 qpair failed and we were unable to recover it. 00:36:43.192 [2024-12-13 12:42:10.805038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.192 [2024-12-13 12:42:10.805074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.192 qpair failed and we were unable to recover it. 00:36:43.192 [2024-12-13 12:42:10.805250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.192 [2024-12-13 12:42:10.805283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.192 qpair failed and we were unable to recover it. 00:36:43.192 [2024-12-13 12:42:10.805468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.192 [2024-12-13 12:42:10.805500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.192 qpair failed and we were unable to recover it. 00:36:43.192 [2024-12-13 12:42:10.805616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.192 [2024-12-13 12:42:10.805650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.192 qpair failed and we were unable to recover it. 00:36:43.192 [2024-12-13 12:42:10.805769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.805821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.806075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.806107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.806236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.806268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.806440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.806472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.806650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.806683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 
00:36:43.193 [2024-12-13 12:42:10.806871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.806908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.807033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.807064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.807315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.807347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.807559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.807601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.807896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.807929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.808044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.808270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.808427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.808581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.808800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 
00:36:43.193 [2024-12-13 12:42:10.808955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.808988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.809162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.809195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.809340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.809372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.809486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.809518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.809655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.809687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.809865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.809899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.810033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.810254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.810408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.810544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 
00:36:43.193 [2024-12-13 12:42:10.810762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.810962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.810993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.811177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.811209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.811403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.811434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.811536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.811568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.811748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.811779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.811899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.811930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.812131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.812164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.812298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.193 [2024-12-13 12:42:10.812330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.193 qpair failed and we were unable to recover it. 00:36:43.193 [2024-12-13 12:42:10.812446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.812477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 
00:36:43.194 [2024-12-13 12:42:10.812650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.812705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.812875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.812911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.813139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.813293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.813456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.813625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.813779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.813970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.814000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.814148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.814178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.814383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.814415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 
00:36:43.194 [2024-12-13 12:42:10.814659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.814691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.814821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.814856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.815046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.815075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.815315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.815353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.815545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.815576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.815703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.815740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.815892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.815926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.816049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.816278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.816428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 
00:36:43.194 [2024-12-13 12:42:10.816590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.816756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.816937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.816972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.817187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.817219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.817346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.817382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.817560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.817590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.817770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.817811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.817989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.818148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.818369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 
00:36:43.194 [2024-12-13 12:42:10.818588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.818754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.818927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.818963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.819093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.819124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.819253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.819289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.819469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.819499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.819693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.819724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.819994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.820030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.820278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.820310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.820434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.820470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 
00:36:43.194 [2024-12-13 12:42:10.820653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.194 [2024-12-13 12:42:10.820723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.194 qpair failed and we were unable to recover it. 00:36:43.194 [2024-12-13 12:42:10.820890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.820928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.821045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.821078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.821321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.821353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.821468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.821500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.821608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.821640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.821767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.821819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 
00:36:43.195 [2024-12-13 12:42:10.822523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.822864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.822984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.823842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.823991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.824024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 
00:36:43.195 [2024-12-13 12:42:10.824203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.824234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.824348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.824380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.824558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.824594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.824769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.824845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.824981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.825124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.825276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.825516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.825685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.825872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.825908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 
00:36:43.195 [2024-12-13 12:42:10.826041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.826074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.826207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.826239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.826482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.826514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.826641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.826674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.826858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.826894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.827026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.827185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.827330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.827542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.827717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 
00:36:43.195 [2024-12-13 12:42:10.827886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.827932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.828056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.828087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.828270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.828301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.195 [2024-12-13 12:42:10.828417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.195 [2024-12-13 12:42:10.828449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.195 qpair failed and we were unable to recover it. 00:36:43.196 [2024-12-13 12:42:10.828556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.196 [2024-12-13 12:42:10.828586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.196 qpair failed and we were unable to recover it. 00:36:43.196 [2024-12-13 12:42:10.828756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.196 [2024-12-13 12:42:10.828802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.196 qpair failed and we were unable to recover it. 00:36:43.196 [2024-12-13 12:42:10.828929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.196 [2024-12-13 12:42:10.828961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.196 qpair failed and we were unable to recover it. 00:36:43.196 [2024-12-13 12:42:10.829082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.196 [2024-12-13 12:42:10.829113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.196 qpair failed and we were unable to recover it. 00:36:43.196 [2024-12-13 12:42:10.829319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.481 [2024-12-13 12:42:10.829350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.481 qpair failed and we were unable to recover it. 00:36:43.481 [2024-12-13 12:42:10.829468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.481 [2024-12-13 12:42:10.829499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.481 qpair failed and we were unable to recover it. 
00:36:43.481 [2024-12-13 12:42:10.829670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.481 [2024-12-13 12:42:10.829702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.481 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0xdd56a0 from 12:42:10.829670 through 12:42:10.833567 ...]
00:36:43.482 [2024-12-13 12:42:10.833685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.482 [2024-12-13 12:42:10.833728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:43.482 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff6e4000b90 from 12:42:10.833685 through 12:42:10.869187 ...]
00:36:43.487 [2024-12-13 12:42:10.869399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.869430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.869532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.869563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.869759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.869799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.869929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.869960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.870158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.870188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.870298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.870329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.870513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.870544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.870665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.870696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.870838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.870871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.871110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.871142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 
00:36:43.487 [2024-12-13 12:42:10.871269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.871300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.871488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.871519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.871692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.871722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.871848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.871880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.872962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.872992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 
00:36:43.487 [2024-12-13 12:42:10.873162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.873193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.873326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.873359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.873489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.873520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.873634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.873666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.873857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.873890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.874125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.874156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.874432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.874464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.874585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.874615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.874794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.874826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.874997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 
00:36:43.487 [2024-12-13 12:42:10.875135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.875334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.875538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.875685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.875901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.487 [2024-12-13 12:42:10.875939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.487 qpair failed and we were unable to recover it. 00:36:43.487 [2024-12-13 12:42:10.876118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.876315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.876346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.876518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.876551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.876734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.876764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.877015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 
00:36:43.488 [2024-12-13 12:42:10.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.877433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.877641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.877804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.877963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.877993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.878104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.878317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.878468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.878638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.878817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 
00:36:43.488 [2024-12-13 12:42:10.878967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.878999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.879126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.879158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.879339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.879371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.879609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.879641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.879765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.879802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.879993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.880143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.880363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.880505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.880715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 
00:36:43.488 [2024-12-13 12:42:10.880957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.880991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.881933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.881964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.882091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.882122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.882247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.882279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.882422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.882452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 
00:36:43.488 [2024-12-13 12:42:10.882620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.882651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.882834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.882867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.882969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.883000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.883103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.883133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.488 qpair failed and we were unable to recover it. 00:36:43.488 [2024-12-13 12:42:10.883237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.488 [2024-12-13 12:42:10.883275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.883397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.883428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.883602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.883634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.883843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.883876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 
00:36:43.489 [2024-12-13 12:42:10.884337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.884879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.884984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.885117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.885256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.885466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.885609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.885824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.885856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 
00:36:43.489 [2024-12-13 12:42:10.885984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.886015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.886271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.886303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.886405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.886436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.886611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.886642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.886847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.886879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.887047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.887078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.887367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.887547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.887579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.887707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.887740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 00:36:43.489 [2024-12-13 12:42:10.887921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.489 [2024-12-13 12:42:10.887953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.489 qpair failed and we were unable to recover it. 
00:36:43.489 [2024-12-13 12:42:10.888423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.489 [2024-12-13 12:42:10.888492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.489 qpair failed and we were unable to recover it.
00:36:43.491 [... the same three-line pattern repeats ~68 more times between 12:42:10.888705 and 12:42:10.902200, now for a different qpair address, tqpair=0x7ff6dc000b90, against the same addr=10.0.0.2, port=4420 ...]
00:36:43.491 [2024-12-13 12:42:10.902373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.902403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.902577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.902608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.902733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.902764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.902958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.902990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.903184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.903214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.903338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.903368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.903553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.903585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.903690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.903720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.903888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.903919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 00:36:43.491 [2024-12-13 12:42:10.904044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.491 [2024-12-13 12:42:10.904075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.491 qpair failed and we were unable to recover it. 
[... triplet continues for tqpair=0x7ff6dc000b90 through 12:42:10.904292, then the target qpair changes ...]
00:36:43.491 [2024-12-13 12:42:10.904452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.491 [2024-12-13 12:42:10.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.491 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0xdd56a0 through 12:42:10.918071, then once for tqpair=0x7ff6d8000b90 ...]
00:36:43.493 [2024-12-13 12:42:10.918287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.493 [2024-12-13 12:42:10.918356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.493 qpair failed and we were unable to recover it.
00:36:43.493 [2024-12-13 12:42:10.918588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.493 [2024-12-13 12:42:10.918624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.493 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff6dc000b90 through 12:42:10.925582, then once for tqpair=0x7ff6d8000b90 ...]
00:36:43.494 [2024-12-13 12:42:10.925795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.494 [2024-12-13 12:42:10.925839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.494 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff6d8000b90 from 12:42:10.926111 through 12:42:10.933225, then once for tqpair=0x7ff6e4000b90 ...]
00:36:43.495 [2024-12-13 12:42:10.933403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.495 [2024-12-13 12:42:10.933475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:43.495 qpair failed and we were unable to recover it.
00:36:43.495 [2024-12-13 12:42:10.933674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.933716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.933936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.933972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.934091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.934123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.934311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.934342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.934444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.934474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.934659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.934691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.934813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.934847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.935035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.935066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.935184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.935215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.935452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.935484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 
00:36:43.495 [2024-12-13 12:42:10.935601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.495 [2024-12-13 12:42:10.935632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.495 qpair failed and we were unable to recover it. 00:36:43.495 [2024-12-13 12:42:10.935765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.935806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.935936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.935967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.936948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.936982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.937180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.937212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 
00:36:43.496 [2024-12-13 12:42:10.937380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.937410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.937547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.937578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.937680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.937712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.937910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.937943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.938064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.938095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.938339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.938370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.938582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.938613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.938734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.938764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.938963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.938996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.939261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.939292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 
00:36:43.496 [2024-12-13 12:42:10.939404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.939435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.939565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.939615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.939838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.939871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.940958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.940989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.941253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.941291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 
00:36:43.496 [2024-12-13 12:42:10.941424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.941455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.941577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.941608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.941734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.941765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.941953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.941985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.942129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.942280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.942437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.942649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.942874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 00:36:43.496 [2024-12-13 12:42:10.942987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.496 [2024-12-13 12:42:10.943018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.496 qpair failed and we were unable to recover it. 
00:36:43.496 [2024-12-13 12:42:10.943190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.943222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.943334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.943365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.943487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.943519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.943716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.943747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.944835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.944868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 
00:36:43.497 [2024-12-13 12:42:10.945048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.945079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.945261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.945292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.945572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.945603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.945714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.945745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.945864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.945897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.946092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.946255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.946413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.946564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.946797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 
00:36:43.497 [2024-12-13 12:42:10.946959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.946990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.947112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.947144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.947327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.947358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.947527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.947558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.947727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.947758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.947929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.947961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.948135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.948167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.948348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.948379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.948560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.948591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.948805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.948844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 
00:36:43.497 [2024-12-13 12:42:10.948968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.948999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.949180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.949209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.949479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.949509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.949689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.949718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.949910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.949942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.950171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.950201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.950315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.950345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.950600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.950629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.950905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.497 [2024-12-13 12:42:10.950935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.497 qpair failed and we were unable to recover it. 00:36:43.497 [2024-12-13 12:42:10.951140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.951169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 
00:36:43.498 [2024-12-13 12:42:10.951362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.951392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.951630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.951659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.951802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.951833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.951959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.951989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.952118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.952148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.952332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.952362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.952510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.952541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.952792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.952822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.952949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.952979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.953099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.953129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 
00:36:43.498 [2024-12-13 12:42:10.953306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.953336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.953549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.953832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.953864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.954084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.954116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.954246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.954276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.954386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.954416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.954630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.954664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.954851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.954882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.955129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.955164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.955291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.955322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 
00:36:43.498 [2024-12-13 12:42:10.955502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.955547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.955743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.955779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.955996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.956209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.956370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.956584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.956726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.956888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.956921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.957069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.957292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 
00:36:43.498 [2024-12-13 12:42:10.957436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.957600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.957737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.957905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.957938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.958041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.958073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.958193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.958224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.958353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.958384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.958562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.498 [2024-12-13 12:42:10.958594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.498 qpair failed and we were unable to recover it. 00:36:43.498 [2024-12-13 12:42:10.958712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.958743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.958884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.958918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 
00:36:43.499 [2024-12-13 12:42:10.959028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.959171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.959381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.959548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.959703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.959872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.959907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.960036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.960067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.960170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.960202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.960309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.960340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 00:36:43.499 [2024-12-13 12:42:10.960513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.499 [2024-12-13 12:42:10.960547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.499 qpair failed and we were unable to recover it. 
00:36:43.499 [2024-12-13 12:42:10.960728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.499 [2024-12-13 12:42:10.960758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.499 qpair failed and we were unable to recover it.
00:36:43.499-00:36:43.504 [... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats continuously from 12:42:10.960881 through 12:42:11.002547 ...]
00:36:43.504 [2024-12-13 12:42:11.002748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.504 [2024-12-13 12:42:11.002803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.504 qpair failed and we were unable to recover it. 00:36:43.504 [2024-12-13 12:42:11.002921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.504 [2024-12-13 12:42:11.002950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.504 qpair failed and we were unable to recover it. 00:36:43.504 [2024-12-13 12:42:11.003063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.003092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.003374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.003409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.003584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.003614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.003801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.003831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.004005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.004049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.004265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.004302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.004571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.004606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.004802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.004840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 
00:36:43.505 [2024-12-13 12:42:11.005033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.005065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.005366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.005539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.005568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.005684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.005714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.005931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.005976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.006192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.006229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.006362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.006403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.006543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.006575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.006831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.006863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.006967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.006995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 
00:36:43.505 [2024-12-13 12:42:11.007161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.007189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.007420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.007455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.007715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.007747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.007889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.007920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.008110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.008151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.008344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.008386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.008610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.008645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.008842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.008888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.009062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.009092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.009282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.009314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 
00:36:43.505 [2024-12-13 12:42:11.009484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.009516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.009706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.009740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.009890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.009924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.010047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.010090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.010322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.010366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.010567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.010610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.010833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.010872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.011079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.011111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.011282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.011315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.011502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.011544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 
00:36:43.505 [2024-12-13 12:42:11.011804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.011839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.505 qpair failed and we were unable to recover it. 00:36:43.505 [2024-12-13 12:42:11.012032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.505 [2024-12-13 12:42:11.012079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.012289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.012330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.012511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.012556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.012750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.012813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.012940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.012972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.013146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.013178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.013453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.013489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.013616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.013648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.013905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.013960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 
00:36:43.506 [2024-12-13 12:42:11.014180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.014230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.014534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.014582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.014709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.014744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.015026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.015060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.015198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.015230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.015339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.015372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.015591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.015626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.015892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.015930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.016218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.016291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.016503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.016553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 
00:36:43.506 [2024-12-13 12:42:11.016763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.016815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.017022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.017071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.017285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.017317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.017594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.017627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.017932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.017970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.018092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.018124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.018411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.018460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.018629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.018669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.018870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.018918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.019114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.019147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 
00:36:43.506 [2024-12-13 12:42:11.019340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.019374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.019620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.019656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.019855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.019891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.020136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.020184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.020331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.020370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.020654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.020700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.020917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.020952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.021075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.021107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.021281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.021312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.506 [2024-12-13 12:42:11.021587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.021622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 
00:36:43.506 [2024-12-13 12:42:11.021893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.506 [2024-12-13 12:42:11.021940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.506 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.022094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.022140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.022283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.022322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.022522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.022565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.022814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.022851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.023123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.023156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.023355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.023386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.023583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.023801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.023838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.023947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.023986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 
00:36:43.507 [2024-12-13 12:42:11.024228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.024274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.024545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.024592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.024742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.024796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.024924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.024968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.025285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.025355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.025582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.025629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.025835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.025881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.026086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.026126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.026285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.026332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.026554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.026593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 
00:36:43.507 [2024-12-13 12:42:11.026735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.026797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.027017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.027056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.027350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.027389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.027676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.027716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.027941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.027983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.028207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.028246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.028447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.028488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.028692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.028731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.028884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.028930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.029224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.029263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 
00:36:43.507 [2024-12-13 12:42:11.029473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.029512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.029754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.029812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.029953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.029998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.030270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.030310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.030617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.030656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.030864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.030907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.031170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.031241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.031536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.031573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.031877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.031911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 00:36:43.507 [2024-12-13 12:42:11.032153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.507 [2024-12-13 12:42:11.032186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.507 qpair failed and we were unable to recover it. 
00:36:43.507 [2024-12-13 12:42:11.032311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.032344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.032475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.032507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.032699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.032732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.032930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.032963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.033149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.033181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.033439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.033471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.033643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.033675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.033919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.033957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.034224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.034257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 00:36:43.508 [2024-12-13 12:42:11.034372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.508 [2024-12-13 12:42:11.034414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.508 qpair failed and we were unable to recover it. 
00:36:43.508 [2024-12-13 12:42:11.034687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.508 [2024-12-13 12:42:11.034719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.508 qpair failed and we were unable to recover it.
[... the same three-record error (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats back-to-back for every retry from 12:42:11.034908 through 12:42:11.077066; only the microsecond timestamps differ ...]
00:36:43.513 [2024-12-13 12:42:11.077188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.077219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.077390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.077422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.077532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.077564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.077763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.077805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.078000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.078032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.078155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.078187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.078357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.078390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.513 [2024-12-13 12:42:11.078631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.513 [2024-12-13 12:42:11.078663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.513 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.078837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.078951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.078986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 
00:36:43.514 [2024-12-13 12:42:11.079096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.079128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.079252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.079284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.079524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.079556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.079774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.079834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.079941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.079972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.080168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.080203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.080339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.080377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.080493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.080524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.080699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.080732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.080934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.080968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 
00:36:43.514 [2024-12-13 12:42:11.081163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.081313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.081459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.081620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.081754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.081915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.081948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.082134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.082166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.082289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.082321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.082437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.082469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.082648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.082679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 
00:36:43.514 [2024-12-13 12:42:11.082809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.082850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.082969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.083002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.083109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.083141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.083385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.083416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.083653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.083685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.083805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.083846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.084020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.084222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.084440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.084596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 
00:36:43.514 [2024-12-13 12:42:11.084825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.084959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.084991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.085129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.085159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.085374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.085405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.085592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.085624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.085731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.085761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.514 [2024-12-13 12:42:11.086009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.514 [2024-12-13 12:42:11.086042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.514 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.086244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.086275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.086447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.086477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.086657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.086688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 
00:36:43.515 [2024-12-13 12:42:11.086807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.086839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.087766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.087822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.088008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.088241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.088380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 
00:36:43.515 [2024-12-13 12:42:11.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.088656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.088884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.088916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.089088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.089120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.089317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.089349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.089454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.089484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.089724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.089755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.089953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.089985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.090088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.090119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.090297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.090327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 
00:36:43.515 [2024-12-13 12:42:11.090468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.090498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.090749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.090791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.091923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.091955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.092125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.092327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 
00:36:43.515 [2024-12-13 12:42:11.092477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.092610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.092746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.092942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.092979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.093086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.093117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.515 [2024-12-13 12:42:11.093353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.515 [2024-12-13 12:42:11.093384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.515 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.093666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.093697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.093815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.093848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.094021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.094053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.094231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.094260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 
00:36:43.516 [2024-12-13 12:42:11.094508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.094538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.094646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.094677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.094915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.094947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.095068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.095100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.095223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.095254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.095430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.095460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.095632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.095668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.095867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.095901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.096029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.096169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 
00:36:43.516 [2024-12-13 12:42:11.096295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.096457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.096590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.096805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.097934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.097967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 
00:36:43.516 [2024-12-13 12:42:11.098240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.098272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.098444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.098476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.098653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.098684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.098873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.098905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.099040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.099070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.099356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.099388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.099509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.099540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.099716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.099747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.099871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.099912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.516 [2024-12-13 12:42:11.100097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.100130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 
00:36:43.516 [2024-12-13 12:42:11.100300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.516 [2024-12-13 12:42:11.100330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.516 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.100564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.100596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.100702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.100733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.100898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.100930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.101062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.101095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.101266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.101297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.101533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.101564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.101690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.101720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.101891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.101924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.102033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 
00:36:43.517 [2024-12-13 12:42:11.102197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.102343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.102559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.102715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.102979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.103166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.103196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.103305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.103343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.103526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.103557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.103739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.103769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 00:36:43.517 [2024-12-13 12:42:11.103988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.517 [2024-12-13 12:42:11.104021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.517 qpair failed and we were unable to recover it. 
00:36:43.522 [2024-12-13 12:42:11.147553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.147584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.147830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.147863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.147987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.148018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.148204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.148235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.148496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.148528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.148660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.148690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.148873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.148909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.149095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.149127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.149319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.149350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 00:36:43.522 [2024-12-13 12:42:11.149519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.522 [2024-12-13 12:42:11.149549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.522 qpair failed and we were unable to recover it. 
00:36:43.522 [2024-12-13 12:42:11.149731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.149762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.149984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.150156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.150188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.150414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.150446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.150557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.150589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.150772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.150814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.151008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.151038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.151164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.151195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.151410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.151442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.151622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.151655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 
00:36:43.523 [2024-12-13 12:42:11.151917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.151950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.152074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.152104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.152288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.152318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.152490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.152526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.152714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.152746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.152934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.152970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.153154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.153186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.153378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.153410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.153595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.153626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.153833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.153865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 
00:36:43.523 [2024-12-13 12:42:11.153992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.154022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.154223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.154255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.154443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.154474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.154676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.154707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.154880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.154912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.523 [2024-12-13 12:42:11.155167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.523 [2024-12-13 12:42:11.155198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.523 qpair failed and we were unable to recover it. 00:36:43.807 [2024-12-13 12:42:11.155320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.807 [2024-12-13 12:42:11.155351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.807 qpair failed and we were unable to recover it. 00:36:43.807 [2024-12-13 12:42:11.155622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.807 [2024-12-13 12:42:11.155653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.807 qpair failed and we were unable to recover it. 00:36:43.807 [2024-12-13 12:42:11.155827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.807 [2024-12-13 12:42:11.155859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.807 qpair failed and we were unable to recover it. 00:36:43.807 [2024-12-13 12:42:11.155971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.807 [2024-12-13 12:42:11.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.807 qpair failed and we were unable to recover it. 
00:36:43.807 [2024-12-13 12:42:11.156179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.807 [2024-12-13 12:42:11.156209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.807 qpair failed and we were unable to recover it. 00:36:43.807 [2024-12-13 12:42:11.156382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.156411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.156523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.156554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.156664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.156694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.156881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.156916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.157107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.157139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.157252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.157284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.157528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.157560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.157679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.157712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.157844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.157877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 
00:36:43.808 [2024-12-13 12:42:11.157991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.158023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.158225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.158255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.158364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.158394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.158574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.158605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.158816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.158849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.158971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.159002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.159194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.159225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.159411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.159443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.159617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.159649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.159752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.159793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 
00:36:43.808 [2024-12-13 12:42:11.160006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.160036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.160230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.160261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.160377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.160409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.160524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.160561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.160741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.160773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.161061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.161095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.161199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.161230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.161466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.161497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.161685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.161716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.161886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.161918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 
00:36:43.808 [2024-12-13 12:42:11.162088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.162118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.162293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.162324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.162500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.162531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.162770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.162811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.163051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.163082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.163210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.163241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.163374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.163404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.163591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.163623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.808 qpair failed and we were unable to recover it. 00:36:43.808 [2024-12-13 12:42:11.163808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.808 [2024-12-13 12:42:11.163840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.163958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.163989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 
00:36:43.809 [2024-12-13 12:42:11.164180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.164211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.164337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.164367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.164629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.164662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.164841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.164875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.165115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.165147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.165247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.165277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.165447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.165477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.165675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.165707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.165875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.165909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.166079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.166110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 
00:36:43.809 [2024-12-13 12:42:11.166305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.166337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.166526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.166556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.166799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.166831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.167093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.167125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.167308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.167339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.167460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.167491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.167796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.167829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.167949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.167982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.168167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.168199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.168459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.168490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 
00:36:43.809 [2024-12-13 12:42:11.168697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.168728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.168912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.168948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.169187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.169519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.169557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.169753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.169813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.169933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.169965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.170161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.170193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.170382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.170414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.170607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.170637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.170876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.170908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 
00:36:43.809 [2024-12-13 12:42:11.171085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.171116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.171295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.171325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.171499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.171531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.171699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.171730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.171947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.171979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.172189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.172220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.172323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.172352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.809 [2024-12-13 12:42:11.172531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.809 [2024-12-13 12:42:11.172563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.809 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.172772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.172823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.173003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.173035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 
00:36:43.810 [2024-12-13 12:42:11.173233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.173264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.173505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.173537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.173743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.173774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.173976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.174007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.174176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.174206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.174422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.174454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.174639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.174670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.174971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.175004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.175273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.175304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 00:36:43.810 [2024-12-13 12:42:11.175543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.810 [2024-12-13 12:42:11.175574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.810 qpair failed and we were unable to recover it. 
00:36:43.810 [2024-12-13 12:42:11.175827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:43.810 [2024-12-13 12:42:11.175859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 
00:36:43.810 qpair failed and we were unable to recover it. 
00:36:43.810 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt from 12:42:11.175 through 12:42:11.220 ...]
00:36:43.815 [2024-12-13 12:42:11.220827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:43.815 [2024-12-13 12:42:11.220862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 
00:36:43.815 qpair failed and we were unable to recover it. 
00:36:43.815 [2024-12-13 12:42:11.221001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.815 [2024-12-13 12:42:11.221032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.815 qpair failed and we were unable to recover it. 00:36:43.815 [2024-12-13 12:42:11.221202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.815 [2024-12-13 12:42:11.221232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.815 qpair failed and we were unable to recover it. 00:36:43.815 [2024-12-13 12:42:11.221487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.815 [2024-12-13 12:42:11.221519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.815 qpair failed and we were unable to recover it. 00:36:43.815 [2024-12-13 12:42:11.221754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.815 [2024-12-13 12:42:11.221811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.815 qpair failed and we were unable to recover it. 00:36:43.815 [2024-12-13 12:42:11.221985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.222017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.222278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.222310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.222597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.222628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.222817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.222858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.223039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.223071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.223255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.223287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 
00:36:43.816 [2024-12-13 12:42:11.223466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.223670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.223700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.223818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.223851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.224113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.224145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.224355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.224386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.224508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.224539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.224669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.224699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.224887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.224923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.225057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.225089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.225343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.225375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 
00:36:43.816 [2024-12-13 12:42:11.225501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.225532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.225719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.225750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.225949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.225981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.226219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.226251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.226370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.226402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.226587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.226618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.226879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.227048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.227080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.227317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.227348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.227624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.227655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 
00:36:43.816 [2024-12-13 12:42:11.227808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.227841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.227965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.227996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.228277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.228309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.228560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.228592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.228836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.228872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.228996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.229028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.229159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.229189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.229360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.229391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.816 qpair failed and we were unable to recover it. 00:36:43.816 [2024-12-13 12:42:11.229566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.816 [2024-12-13 12:42:11.229597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.229778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.229837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 
00:36:43.817 [2024-12-13 12:42:11.230028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.230060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.230320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.230352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.230544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.230576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.230689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.230726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.230919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.230951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.231162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.231192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.231360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.231392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.231595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.231627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.231901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.231934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.232176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.232207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 
00:36:43.817 [2024-12-13 12:42:11.232445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.232476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.232589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.232620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.232809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.232849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.233091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.233121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.233225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.233255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.233517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.233549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.233827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.233862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.234060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.234093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.234275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.234307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.234544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.234575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 
00:36:43.817 [2024-12-13 12:42:11.234703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.234948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.234981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.235117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.235147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.235334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.235365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.235553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.235583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.235758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.235797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.235922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.235953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.236072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.236103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.236225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.236254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.236435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.236467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 
00:36:43.817 [2024-12-13 12:42:11.236593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.236624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.236826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.236861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.817 [2024-12-13 12:42:11.237969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.817 [2024-12-13 12:42:11.237999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.817 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.238112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.238144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.238327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.238359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 
00:36:43.818 [2024-12-13 12:42:11.238531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.238561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.238756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.238965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.238996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.239184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.239222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.239479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.239511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.239624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.239655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.239827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.239859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.240058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.240089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.240211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.240246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.240366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.240397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 
00:36:43.818 [2024-12-13 12:42:11.240633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.240665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.240799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.240841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.241102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.241135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.241320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.241352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.241535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.241566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.241695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.241727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.241943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.241976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.242163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.242194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.242372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.242404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.242643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.242676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 
00:36:43.818 [2024-12-13 12:42:11.242910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.242944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.243067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.243099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.243284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.243315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.243508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.243538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.243710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.243740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.243922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.243955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.244128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.244160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.244280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.244310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.244505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.244536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.244717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.244747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 
00:36:43.818 [2024-12-13 12:42:11.245014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.245051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.245173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.245204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.245324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.245354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.245644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.245676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.245869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.245902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.246024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.246055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.818 [2024-12-13 12:42:11.246236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.818 [2024-12-13 12:42:11.246269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.818 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.246440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.246471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.246735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.246766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.246966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.246998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 
00:36:43.819 [2024-12-13 12:42:11.247270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.247300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.247473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.247505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.247678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.247708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.247972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.248010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.248132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.248163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.248424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.248455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.248636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.248666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.248908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.248942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.249162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.249193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 00:36:43.819 [2024-12-13 12:42:11.249389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.249420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it. 
00:36:43.819 [2024-12-13 12:42:11.249544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.819 [2024-12-13 12:42:11.249574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.819 qpair failed and we were unable to recover it.
[The same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation for this qpair from 12:42:11.249544 through 12:42:11.293520; the duplicate entries are elided here.]
00:36:43.824 [2024-12-13 12:42:11.293488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.824 [2024-12-13 12:42:11.293520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.824 qpair failed and we were unable to recover it.
00:36:43.824 [2024-12-13 12:42:11.293696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.824 [2024-12-13 12:42:11.293726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.824 qpair failed and we were unable to recover it. 00:36:43.824 [2024-12-13 12:42:11.293952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.824 [2024-12-13 12:42:11.293986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.824 qpair failed and we were unable to recover it. 00:36:43.824 [2024-12-13 12:42:11.294095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.824 [2024-12-13 12:42:11.294125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.294294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.294325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.294512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.294543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.294790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.294823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.294943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.294975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.295151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.295181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.295318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.295349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.295573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.295604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 
00:36:43.825 [2024-12-13 12:42:11.295802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.295841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.296020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.296051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.296231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.296264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.296451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.296481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.296667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.296698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.296958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.296994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.297167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.297199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.297306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.297338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.297523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.297560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.297689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.297720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 
00:36:43.825 [2024-12-13 12:42:11.297844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.297878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.297982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.298014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.298189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.298220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.298401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.298436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.298615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.298646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.298886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.298919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 
00:36:43.825 [2024-12-13 12:42:11.299675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.299847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.299974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.300005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.300189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.825 [2024-12-13 12:42:11.300401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.825 [2024-12-13 12:42:11.300434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.825 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.300597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.300776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.300832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.301088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.301120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.301232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.301263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.301373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.301403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 
00:36:43.826 [2024-12-13 12:42:11.301588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.301618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.301819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.301852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.302121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.302152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.302359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.302390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.302571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.302602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.302714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.302746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.302944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.302976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.303089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.303120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.303225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.303255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.303376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.303406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 
00:36:43.826 [2024-12-13 12:42:11.303590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.303633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.303743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.303776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.304026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.304058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.304229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.304259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.304495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.304525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.304713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.304744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.304882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.304919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.305033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.305064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.305265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.305295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.305474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.305507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 
00:36:43.826 [2024-12-13 12:42:11.305753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.305795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.305911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.305942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.306060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.306090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.306210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.306241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.306427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.306464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.306712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.306743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.306965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.307002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.307140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.307170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.307442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.307475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.307675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.307706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 
00:36:43.826 [2024-12-13 12:42:11.307815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.307847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.308025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.308056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.308314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.826 [2024-12-13 12:42:11.308345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.826 qpair failed and we were unable to recover it. 00:36:43.826 [2024-12-13 12:42:11.308482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.308514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.308697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.308728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.308910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.308950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.309134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.309164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.309369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.309399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.309638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.309670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.309880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.309912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 
00:36:43.827 [2024-12-13 12:42:11.310156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.310393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.310426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.310549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.310580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.310779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.310834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.311013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.311044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.311179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.311209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.311384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.311414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.311653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.311684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.311934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.311968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.312092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.312128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 
00:36:43.827 [2024-12-13 12:42:11.312322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.312359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.312620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.312651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.312777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.312818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.313082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.313113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.313302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.313332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.313558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.313595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.313724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.313754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.313936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.313968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.314142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.314181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.314356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.314387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 
00:36:43.827 [2024-12-13 12:42:11.314613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.314645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.314857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.314891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.315099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.315255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.315416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.315631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.315848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.315973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.316004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.316119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.316150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.316338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.316369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 
00:36:43.827 [2024-12-13 12:42:11.316556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.316588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.316777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.827 [2024-12-13 12:42:11.316836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.827 qpair failed and we were unable to recover it. 00:36:43.827 [2024-12-13 12:42:11.317072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.317103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.317289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.317321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.317457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.317488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.317596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.317626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.317822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.318039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.318071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.318331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.318362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.318481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.318513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 
00:36:43.828 [2024-12-13 12:42:11.318714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.318745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.318877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.318911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.319090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.319122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.319395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.319425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.319610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.319641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.319762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.319825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.320008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.320039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.320153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.320185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.320300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.320332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 00:36:43.828 [2024-12-13 12:42:11.320512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.828 [2024-12-13 12:42:11.320543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.828 qpair failed and we were unable to recover it. 
00:36:43.828 [2024-12-13 12:42:11.320665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.828 [2024-12-13 12:42:11.320702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.828 qpair failed and we were unable to recover it.
00:36:43.831 (previous 3-line connect()/qpair-failure sequence for tqpair=0x7ff6d8000b90 repeated 106 more times, 12:42:11.320875 through 12:42:11.342844; repetitions collapsed)
00:36:43.831 [2024-12-13 12:42:11.342902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde35e0 (9): Bad file descriptor
00:36:43.831 [2024-12-13 12:42:11.343328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.831 [2024-12-13 12:42:11.343399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.831 qpair failed and we were unable to recover it.
00:36:43.831 (previous 3-line connect()/qpair-failure sequence for tqpair=0x7ff6dc000b90 repeated 16 more times, 12:42:11.343627 through 12:42:11.346753; repetitions collapsed)
00:36:43.831 [2024-12-13 12:42:11.346980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.831 [2024-12-13 12:42:11.347039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:43.831 qpair failed and we were unable to recover it.
00:36:43.831 [2024-12-13 12:42:11.347283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.831 [2024-12-13 12:42:11.347322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:43.831 qpair failed and we were unable to recover it.
00:36:43.833 (previous 3-line connect()/qpair-failure sequence for tqpair=0x7ff6d8000b90 repeated 83 more times, 12:42:11.347459 through 12:42:11.363931; repetitions collapsed)
00:36:43.833 [2024-12-13 12:42:11.364116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.833 [2024-12-13 12:42:11.364148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.833 qpair failed and we were unable to recover it. 00:36:43.833 [2024-12-13 12:42:11.364268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.833 [2024-12-13 12:42:11.364298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.833 qpair failed and we were unable to recover it. 00:36:43.833 [2024-12-13 12:42:11.364489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.364521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.364685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.364718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.364844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.364885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.365095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.365127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.365251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.365282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.365415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.365447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.365557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.365590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.365761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.365802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 
00:36:43.834 [2024-12-13 12:42:11.365977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.366010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.366193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.366225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.366506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.366538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.366657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.366688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.366798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.366830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.367009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.367041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.367170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.367201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.367413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.367444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.367623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.367655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.367770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.367814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 
00:36:43.834 [2024-12-13 12:42:11.368013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.368046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.368237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.368268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.368400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.368433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.368605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.368637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.368808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.368849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.369031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.369063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.369322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.369352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.369470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.369501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.369778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.369838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.370028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.370059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 
00:36:43.834 [2024-12-13 12:42:11.370236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.370267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.370519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.370590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.370800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.370839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.371051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.371083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.371202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.371233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.371418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.371449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.834 [2024-12-13 12:42:11.371636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.834 [2024-12-13 12:42:11.371666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.834 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.371772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.371817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.371989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.372021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.372192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.372224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 
00:36:43.835 [2024-12-13 12:42:11.372417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.372449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.372677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.372708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.372836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.372868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.373168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.373315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.373471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.373614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.373884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.373996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.374028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.374153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.374184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 
00:36:43.835 [2024-12-13 12:42:11.374357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.374388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.374584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.374626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.374865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.374899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.375095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.375127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.375239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.375271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.375452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.375484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.375731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.375763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.375953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.375985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.376099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.376136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.376243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.376275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 
00:36:43.835 [2024-12-13 12:42:11.376466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.376496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.376804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.376967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.376999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.377116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.377147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.377257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.377288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.377474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.377505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.377835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.377868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.378053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.378085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.378273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.378304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.378413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.378444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 
00:36:43.835 [2024-12-13 12:42:11.378646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.378678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.378855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.378887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.379152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.379184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.379359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.379391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.379586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.379618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.379800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.835 [2024-12-13 12:42:11.379832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.835 qpair failed and we were unable to recover it. 00:36:43.835 [2024-12-13 12:42:11.380011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.380152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.380297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.380503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 
00:36:43.836 [2024-12-13 12:42:11.380794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.380948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.380980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.381233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.381265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.381454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.381486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.381589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.381619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.381811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.381851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.381983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.382015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.382253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.382286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.382483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.382515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.382727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 
00:36:43.836 [2024-12-13 12:42:11.382931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.382963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.383186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.383217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.383325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.383356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.383548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.383581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.383689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.383719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.383835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.383868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.384056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.384089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.384216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.384247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.384421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.384452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.384652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.384684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 
00:36:43.836 [2024-12-13 12:42:11.384868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.384902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.385025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.385059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.385259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.385290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.385465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.385497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.385665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.385698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.385870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.385903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.386157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.386188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.386309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.386339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.386466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.386497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.386684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.386716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 
00:36:43.836 [2024-12-13 12:42:11.386862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.386894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.387019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.387051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.387288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.387324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.387522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.387555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.387800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.387834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.387943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.836 [2024-12-13 12:42:11.387973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.836 qpair failed and we were unable to recover it. 00:36:43.836 [2024-12-13 12:42:11.388089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.388119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.388292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.388324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.388455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.388487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.388593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.388622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 
00:36:43.837 [2024-12-13 12:42:11.388801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.388833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.389885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.389930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.390136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.390167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.390298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.390328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 00:36:43.837 [2024-12-13 12:42:11.390452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.837 [2024-12-13 12:42:11.390485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.837 qpair failed and we were unable to recover it. 
00:36:43.837 [2024-12-13 12:42:11.390676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.837 [2024-12-13 12:42:11.390709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.837 qpair failed and we were unable to recover it.
00:36:43.837 (the connect()/qpair error pair above repeats identically for tqpair=0xdd56a0 through 12:42:11.397779)
00:36:43.838 [2024-12-13 12:42:11.398139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.838 [2024-12-13 12:42:11.398211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:43.838 qpair failed and we were unable to recover it.
00:36:43.838 (the same pair repeats identically for tqpair=0x7ff6dc000b90 through 12:42:11.406478)
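On Linux, errno 111 is ECONNREFUSED: the initiator's TCP SYN to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is answered with a RST because nothing is accepting connections there, so each qpair connect attempt fails immediately and nvme_tcp gives the qpair up. A minimal standalone sketch, not SPDK code, that reproduces the same errno against the address and port taken from the log above:

/* probe_connect.c - blocking connect() probe; with no listener on
 * 10.0.0.2:4420 this prints: connect() failed, errno = 111 (Connection refused) */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Built with cc probe_connect.c and run while the target subsystem is down, this fails exactly the way the log does; once a listener is up on 4420 the same probe succeeds. That makes it a quick way to distinguish a refused connection (111) from a timeout (110, ETIMEDOUT) or an unreachable host (113, EHOSTUNREACH).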
00:36:43.839 [2024-12-13 12:42:11.406729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.839 [2024-12-13 12:42:11.406777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:43.839 qpair failed and we were unable to recover it.
00:36:43.839 (the same pair repeats identically for tqpair=0x7ff6e4000b90 through 12:42:11.407719)
00:36:43.839 [2024-12-13 12:42:11.407987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.839 [2024-12-13 12:42:11.408023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.839 qpair failed and we were unable to recover it.
00:36:43.842 (the same pair repeats identically for tqpair=0xdd56a0 through 12:42:11.435791, every attempt ending "qpair failed and we were unable to recover it.")
00:36:43.842 [2024-12-13 12:42:11.436037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.436069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.436256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.436288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.436532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.436564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.436829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.436862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.436986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.437018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.437192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.437223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.437391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.437421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.437592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.842 [2024-12-13 12:42:11.437624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.842 qpair failed and we were unable to recover it. 00:36:43.842 [2024-12-13 12:42:11.437730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.437762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.437930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 
00:36:43.843 [2024-12-13 12:42:11.438192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.438224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.438433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.438464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.438642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.438673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.438933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.438966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.439169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.439201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.439440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.439472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.439675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.439706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.439948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.439982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.440097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.440258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 
00:36:43.843 [2024-12-13 12:42:11.440395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.440538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.440678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.440887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.440920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.441109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.441141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.441335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.441368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.441543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.441573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.441750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.441799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.441988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.442144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 
00:36:43.843 [2024-12-13 12:42:11.442364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.442514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.442655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.442875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.442907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.443030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.443061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.443265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.443400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.443432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.443625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.443656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.443834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.443866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.444053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.444085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 
00:36:43.843 [2024-12-13 12:42:11.444387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.444419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.444611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.444642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.444826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.843 [2024-12-13 12:42:11.444863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.843 qpair failed and we were unable to recover it. 00:36:43.843 [2024-12-13 12:42:11.445125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.445155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.445364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.445396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.445553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.445725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.445756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.445871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.445903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.446164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.446196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.446431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.446463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 
00:36:43.844 [2024-12-13 12:42:11.446580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.446612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.446801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.446834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.447090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.447122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.447401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.447432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.447622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.447653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.447869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.447902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.448029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.448059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.448263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.448294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.448561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.448592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.448773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.448815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 
00:36:43.844 [2024-12-13 12:42:11.448940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.448970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.449153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.449185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.449355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.449385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.449556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.449587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.449696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.449728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.449853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.449892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.450081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.450112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.450240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.450271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.450483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.450515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.450755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.450798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 
00:36:43.844 [2024-12-13 12:42:11.450989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.451021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.451305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.451343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.451446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.451478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.451581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.451612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.451729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.451761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.452013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.452044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.452213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.452245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.452424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.452455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.452590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.452621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.452804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.452837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 
00:36:43.844 [2024-12-13 12:42:11.453029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.453061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.453189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.453220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.844 [2024-12-13 12:42:11.453385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.844 [2024-12-13 12:42:11.453415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.844 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.453547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.453579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.453700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.453731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.453920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.453952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.454066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.454098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.454287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.454319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.454433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.454463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.454582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.454614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 
00:36:43.845 [2024-12-13 12:42:11.454802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.454835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.455010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.455041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.455225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.455256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.455378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.455410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.455521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.455552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.455730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.455761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.456042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.456076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.456365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.456396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.456595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.456633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.456828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 
00:36:43.845 [2024-12-13 12:42:11.457031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.457237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.457269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.457455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.457486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.457740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.457772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.457901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.457933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.458127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.458159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.458354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.458386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.458624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.458655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.458828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.458862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.459119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.459151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 
00:36:43.845 [2024-12-13 12:42:11.459358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.459390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.459492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.459523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.459646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.459677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.459926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.459959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.460165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.460197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.460328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.460360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.460472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.460503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.460621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.460653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.460832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.460865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.461105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.461137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 
00:36:43.845 [2024-12-13 12:42:11.461325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.461356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.461541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.845 [2024-12-13 12:42:11.461572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.845 qpair failed and we were unable to recover it. 00:36:43.845 [2024-12-13 12:42:11.461745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.461775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.461886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.461918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.462159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.462191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.462458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.462489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.462662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.462695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.462872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.462905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.463083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.463114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 00:36:43.846 [2024-12-13 12:42:11.463350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.846 [2024-12-13 12:42:11.463382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:43.846 qpair failed and we were unable to recover it. 
00:36:43.846 [2024-12-13 12:42:11.463555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.846 [2024-12-13 12:42:11.463585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.846 qpair failed and we were unable to recover it.
00:36:43.847 [message repeated for every reconnect attempt on tqpair=0xdd56a0 from 12:42:11.463 through 12:42:11.476]
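errno = 111 is ECONNREFUSED on Linux: the NVMe/TCP target at 10.0.0.2:4420 has stopped listening (the test kills it just below), so every connect() from the host side is actively refused. A minimal standalone C sketch of the same condition, illustrative only and not SPDK's posix_sock_create:

/* Sketch: connect() to a port with no listener fails with
 * ECONNREFUSED (errno 111 on Linux), which is what the
 * posix_sock_create messages above are reporting.
 * Address and port mirror the log; nothing here is SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no target listening, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}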
00:36:43.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 541634 Killed "${NVMF_APP[@]}" "$@"
00:36:43.847 [2024-12-13 12:42:11.477082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.847 [2024-12-13 12:42:11.477115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.847 qpair failed and we were unable to recover it.
00:36:43.847 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:43.847 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:43.847 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:43.847 [connect() errno = 111 messages on tqpair=0xdd56a0 repeat between the trace lines above, through 12:42:11.478]
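The target app is restarted with -m 0xF0, an SPDK core mask: bits 4 through 7 are set, so the reactors of the new nvmf_tgt run on CPU cores 4-7. A quick sketch of how such a mask decodes; the constant mirrors the flag above, the loop itself is illustrative:

/* Decode a core mask like the -m 0xF0 passed to nvmf_tgt above.
 * 0xF0 = binary 1111 0000, i.e. cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0; /* value from the -m flag in the trace */
    for (int core = 0; core < 8 * (int)sizeof(mask); core++)
        if (mask & (1UL << core))
            printf("core %d enabled\n", core); /* prints cores 4..7 */
    return 0;
}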
00:36:43.847 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:43.847 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.847 [2024-12-13 12:42:11.478952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.847 [2024-12-13 12:42:11.478984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:43.847 qpair failed and we were unable to recover it.
00:36:44.124 [message repeated on tqpair=0xdd56a0 through 12:42:11.481, then on tqpair=0x7ff6d8000b90 and tqpair=0x7ff6e4000b90 through 12:42:11.485]
00:36:44.124 [2024-12-13 12:42:11.485257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.124 [2024-12-13 12:42:11.485294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420
00:36:44.124 qpair failed and we were unable to recover it.
00:36:44.125 [message repeated on tqpair=0x7ff6e4000b90 and tqpair=0x7ff6d8000b90, interleaved with the startup trace below, through 12:42:11.489]
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542722
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542722
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542722 ']'
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:44.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:44.125 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
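waitforlisten is the harness helper that blocks until the new nvmf_tgt process (pid 542722) is alive and its RPC socket at /var/tmp/spdk.sock accepts connections, retrying up to max_retries=100 times. The real helper is a bash function in autotest_common.sh; a rough C equivalent of just the socket-polling part, under that assumption:

/* Rough C equivalent of the waitforlisten step above: poll until a
 * UNIX-domain socket (here /var/tmp/spdk.sock) accepts a connection.
 * Path and retry bound mirror the trace; this is a sketch, not the
 * actual bash helper from autotest_common.sh. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* retry every 100 ms */
    }
    return -1;                  /* never came up */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}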
00:36:44.125 [2024-12-13 12:42:11.489827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.489862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.490944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.490978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.491145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.491178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.491294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.491328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.491525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.491557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 
00:36:44.125 [2024-12-13 12:42:11.491738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.491769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.491903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.125 [2024-12-13 12:42:11.491936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.125 qpair failed and we were unable to recover it. 00:36:44.125 [2024-12-13 12:42:11.492117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.492148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.492272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.492303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.492607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.492642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.492745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.492777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.492925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.492959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.493079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.493299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.493472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 
00:36:44.126 [2024-12-13 12:42:11.493618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.493766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.493923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.493954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.494059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.494088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.494264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.494297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.494417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.494448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.494557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.494587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.494768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.494813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.495092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.495124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.495236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.495269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 
00:36:44.126 [2024-12-13 12:42:11.495395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.495427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.495603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.495636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.495756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.495798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.496857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.496892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 00:36:44.126 [2024-12-13 12:42:11.497178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.126 [2024-12-13 12:42:11.497211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.126 qpair failed and we were unable to recover it. 
00:36:44.126 [2024-12-13 12:42:11.497385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.126 [2024-12-13 12:42:11.497418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:44.126 qpair failed and we were unable to recover it.
00:36:44.126 [... the same connect()/qpair-failure triplet repeats for tqpair=0x7ff6d8000b90, timestamps 12:42:11.497603 through 12:42:11.502908 ...]
00:36:44.127 [2024-12-13 12:42:11.503145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.127 [2024-12-13 12:42:11.503216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.127 qpair failed and we were unable to recover it.
00:36:44.127 [... the same connect()/qpair-failure triplet repeats for tqpair=0x7ff6dc000b90, timestamps 12:42:11.503356 through 12:42:11.510762 ...]
00:36:44.128 [2024-12-13 12:42:11.511001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.128 [2024-12-13 12:42:11.511071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [... the same connect()/qpair-failure triplet repeats for tqpair=0xdd56a0, timestamps 12:42:11.511268 through 12:42:11.525575 ...]
00:36:44.130 [2024-12-13 12:42:11.525709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.130 [2024-12-13 12:42:11.525750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:44.130 qpair failed and we were unable to recover it.
00:36:44.130 [... the same connect()/qpair-failure triplet repeats for tqpair=0x7ff6d8000b90, timestamps 12:42:11.525877 through 12:42:11.531410 ...]
00:36:44.131 [2024-12-13 12:42:11.531549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:44.131 [2024-12-13 12:42:11.531590] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:44.131 [... the same connect()/qpair-failure triplet continues for tqpair=0x7ff6d8000b90, timestamps 12:42:11.531604 through 12:42:11.540284 ...]
00:36:44.132 [2024-12-13 12:42:11.540459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.540490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.540618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.540650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.540923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.540957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.541204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.541237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.541429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.541460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.541703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.541735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.541941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.541975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.542171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.542203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.542390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.542422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.542713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.542746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 
00:36:44.132 [2024-12-13 12:42:11.543002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.543035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.543213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.543245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.543367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.543399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.543515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.543546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.543833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.543866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.543974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.544117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.544271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.544480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.544635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 
00:36:44.132 [2024-12-13 12:42:11.544802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.544854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.544971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.545005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.545218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.545251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.545359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.545393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.545569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.545602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.545724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.545757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.545981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.546013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.546205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.546236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.546339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.132 [2024-12-13 12:42:11.546370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.132 qpair failed and we were unable to recover it. 00:36:44.132 [2024-12-13 12:42:11.546469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.546499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 
00:36:44.133 [2024-12-13 12:42:11.546714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.546747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.546880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.546913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.547095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.547127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.547241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.547272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.547482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.547516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.547689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.547721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.547860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.547894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.548071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.548103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.548340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.548372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.548551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.548582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 
00:36:44.133 [2024-12-13 12:42:11.548694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.548725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.548856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.548893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.549084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.549116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.549365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.549397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.549499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.549531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.549645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.549676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.549852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.549886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.550088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.550120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.550253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.550285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.550399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.550431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 
00:36:44.133 [2024-12-13 12:42:11.550654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.550686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.550814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.550848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.551031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.551063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.551314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.551346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.551541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.551572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.551673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.551705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.551814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.551846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.552016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.552049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.552286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.552317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.552534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.552566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 
00:36:44.133 [2024-12-13 12:42:11.552739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.552776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.552921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.552954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.553082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.133 [2024-12-13 12:42:11.553114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.133 qpair failed and we were unable to recover it. 00:36:44.133 [2024-12-13 12:42:11.553228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.553261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.553552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.553584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.553701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.553936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.553969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.554078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.554231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 
00:36:44.134 [2024-12-13 12:42:11.554599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.554808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.554961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.554993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.555096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.555127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.555254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.555286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.555552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.555584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.555765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.555805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.555983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.556155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.556291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 
00:36:44.134 [2024-12-13 12:42:11.556459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.556666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.556827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.556861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.557049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.557080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.557189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.557221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.557423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.557454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.557698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.557730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.558016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.558061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.558194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.558226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 00:36:44.134 [2024-12-13 12:42:11.558414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.134 [2024-12-13 12:42:11.558447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.134 qpair failed and we were unable to recover it. 
00:36:44.136 [preceding connect()/qpair-failure message group repeated 82 more times for tqpair=0xdd56a0, 2024-12-13 12:42:11.558194 through 12:42:11.575774]
00:36:44.136 [2024-12-13 12:42:11.575906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.575938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.576200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.576237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.576360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.576392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.576613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.576644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.576818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.576850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.577088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.577120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.577302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.577334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.577520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.577552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.577725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.577757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.578042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.578075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 
00:36:44.136 [2024-12-13 12:42:11.578280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.578312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.136 qpair failed and we were unable to recover it. 00:36:44.136 [2024-12-13 12:42:11.578502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.136 [2024-12-13 12:42:11.578534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.578709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.578748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.578959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.578992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.579102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.579134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.579255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.579287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.579406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.579437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.579697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.579728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.579870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.580025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.580057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 
00:36:44.137 [2024-12-13 12:42:11.580241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.580272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.580473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.580504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.580801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.580834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.581076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.581108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.581288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.581320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.581559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.581590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.581771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.581823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.582127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.582159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.582283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.582314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.582576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.582607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 
00:36:44.137 [2024-12-13 12:42:11.582739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.582771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.583054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.583086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.583300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.583331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.583589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.583620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.583885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.583919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.584089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.584120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.584371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.584401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.584507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.584538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.584660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.584691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.584862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.584901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 
00:36:44.137 [2024-12-13 12:42:11.585094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.585125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.585310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.585341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.585515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.585546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.585726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.585757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.586006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.586039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.586232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.586281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.586490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.586521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.586700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.586732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.137 qpair failed and we were unable to recover it. 00:36:44.137 [2024-12-13 12:42:11.586929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.137 [2024-12-13 12:42:11.586961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.587132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.587164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 
00:36:44.138 [2024-12-13 12:42:11.587429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.587461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.587653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.587685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.587808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.587841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.588078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.588150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.588311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.588345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.588479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.588512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.588715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.588746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.588873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.588907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.589104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.589137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 00:36:44.138 [2024-12-13 12:42:11.589325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.138 [2024-12-13 12:42:11.589357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.138 qpair failed and we were unable to recover it. 
[output condensed: the three lines above repeat, identical except for timestamps, 79 more times for tqpair=0x7ff6dc000b90 between 12:42:11.588311 and 12:42:11.604598]
00:36:44.140 [2024-12-13 12:42:11.604884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.140 [2024-12-13 12:42:11.604954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6d8000b90 with addr=10.0.0.2, port=4420
00:36:44.140 qpair failed and we were unable to recover it.
00:36:44.140 [2024-12-13 12:42:11.605122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.140 [2024-12-13 12:42:11.605160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.140 qpair failed and we were unable to recover it.
00:36:44.140 [2024-12-13 12:42:11.606315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.606346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.606528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.606560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.606751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.606792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.606904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.606937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.607174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.607206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.607450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.607482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.607612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.607645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.607892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.607926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.608041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.608074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.608256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.608287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 
00:36:44.140 [2024-12-13 12:42:11.608408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.608439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.608608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.608640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.608918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.608951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.609084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.609117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.609303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.609335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.609456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.609488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.609658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.609689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.609956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.609989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.610170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.610202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 00:36:44.140 [2024-12-13 12:42:11.610356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.140 [2024-12-13 12:42:11.610456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.140 [2024-12-13 12:42:11.610487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.140 qpair failed and we were unable to recover it. 
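On Linux, errno = 111 is ECONNREFUSED: nothing was listening on 10.0.0.2:4420 (the NVMe/TCP port) when each qpair tried to connect. A minimal standalone C sketch, not SPDK code, that reproduces the same failure mode with the address and port taken from the log:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Standalone reproduction (illustrative only): connect() to a TCP
     * port with no listener fails with errno = 111 (ECONNREFUSED).
     * Against 127.0.0.1 and a closed port the refusal is immediate. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),                  /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target addr from the log */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the target, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }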
[12:42:11.610597-12:42:11.632046: the connect()/qpair ERROR sequence continues uninterrupted against tqpair 0x7ff6dc000b90, 0x7ff6d8000b90, 0x7ff6e4000b90, and 0xdd56a0; no other events in this interval.]
00:36:44.143 [2024-12-13 12:42:11.632479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:44.143 [2024-12-13 12:42:11.632504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:44.143 [2024-12-13 12:42:11.632511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:44.143 [2024-12-13 12:42:11.632517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:44.143 [2024-12-13 12:42:11.632523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:44.143 [2024-12-13 12:42:11.633920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:44.143 [2024-12-13 12:42:11.634029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:44.143 [2024-12-13 12:42:11.634111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:44.143 [2024-12-13 12:42:11.634112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
[12:42:11.632155-12:42:11.635685: the connect()/qpair ERROR sequence against tqpair 0xdd56a0 continues around the NOTICE lines above.]
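The reactors only report starting at 12:42:11.634, roughly 30 ms after the first connect() attempts at 12:42:11.604, which suggests the nvmf target was likely still starting while the host side was already dialing. A generic connect-with-retry sketch in C (illustrative only; connect_with_retry and its parameters are hypothetical, not the harness's actual logic) of how an initiator can ride out that startup window:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Treat ECONNREFUSED as transient (listener not up yet) and retry
     * with a capped exponential backoff; treat anything else as fatal. */
    static bool connect_with_retry(const char *ip, uint16_t port, int max_attempts)
    {
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
        };
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
            return false;
        }

        for (int attempt = 0; attempt < max_attempts; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                return false;
            }
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);          /* listener is up; caller can reconnect for real */
                return true;
            }
            int err = errno;
            close(fd);
            if (err != ECONNREFUSED) {
                return false;       /* some other, likely permanent, error */
            }
            /* backoff: 50 ms, 100 ms, 200 ms, 400 ms, then 800 ms per try */
            usleep(50000u << (attempt < 4 ? attempt : 4));
        }
        return false;
    }

For example, connect_with_retry("10.0.0.2", 4420, 20) would poll for several seconds before giving up, long enough to cover the ~30 ms startup gap seen in this log.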
[12:42:11.635817-12:42:11.649201: the connect()/qpair ERROR sequence continues against tqpair 0xdd56a0, 0x7ff6e4000b90, and 0x7ff6dc000b90; every attempt in this interval fails with errno = 111 and no qpair recovers.]
00:36:44.145 [2024-12-13 12:42:11.649319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.649353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.649479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.649513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.649688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.649723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.649871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.649914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.650112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.650148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.650331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.650365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.650505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.650539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.650729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.650764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.651025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.651061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.651237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.651272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 
00:36:44.145 [2024-12-13 12:42:11.651445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.651478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.651669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.651705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.651974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.145 [2024-12-13 12:42:11.652012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.145 qpair failed and we were unable to recover it. 00:36:44.145 [2024-12-13 12:42:11.652138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.652173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.652302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.652338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.652467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.652502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.652694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.652730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.652851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.652886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.653011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.653045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.653294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.653329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 
00:36:44.146 [2024-12-13 12:42:11.653503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.653538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.653733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.653768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.653971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.654007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.654132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.654166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.654382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.654418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.654610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.654643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.654835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.654871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.654986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.655021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.655130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.655164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.655335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.655372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 
00:36:44.146 [2024-12-13 12:42:11.655560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.655594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.655723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.655759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.656013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.656048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.656227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.656264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.656374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.656416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.656595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.656635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.656882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.656919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.657165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.657379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.657412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.657549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.657583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 
00:36:44.146 [2024-12-13 12:42:11.657829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.657866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.658043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.658077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.658319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.658355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.658532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.658565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.658673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.658705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.658827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.658863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.659065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.659100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.146 [2024-12-13 12:42:11.659271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.146 [2024-12-13 12:42:11.659305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.146 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.659566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.659601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.659715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.659749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 
00:36:44.147 [2024-12-13 12:42:11.659951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.659987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.660224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.660258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.660448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.660483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.660660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.660693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.660814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.660850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.661026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.661059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.661196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.661229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.661469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.661504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.661675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.661709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.661825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.661860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 
00:36:44.147 [2024-12-13 12:42:11.662127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.662162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.662277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.662312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.662501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.662534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.662667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.662700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.662833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.662868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.663051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.663086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.663293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.663324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.663433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.663463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.663639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.663670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.663843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.663874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 
00:36:44.147 [2024-12-13 12:42:11.664135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.664165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.664282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.664312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.664493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.664524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.664630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.664660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.664831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.664871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.665048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.665079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.665269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.665299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.665477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.665508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.665754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.665812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.665936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.665967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 
00:36:44.147 [2024-12-13 12:42:11.666166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.666199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.666443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.666475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.666595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.666627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.666800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.666833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.666973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.667108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.667139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.147 qpair failed and we were unable to recover it. 00:36:44.147 [2024-12-13 12:42:11.667252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.147 [2024-12-13 12:42:11.667284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.667471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.667502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.667692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.667723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.667917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.667950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 
00:36:44.148 [2024-12-13 12:42:11.668123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.668155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.668346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.668377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.668562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.668594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.668796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.668830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.669883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.669914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 
00:36:44.148 [2024-12-13 12:42:11.670103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.670134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.670394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.670425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.670600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.670630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.670898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.670931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.671044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.671074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.671270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.671299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.671468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.671499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.671806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6e4000b90 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.672023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.672068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.672312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.672348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 
00:36:44.148 [2024-12-13 12:42:11.672544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.672576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.672695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.672727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.672922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.672954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.673073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.673106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.673299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.673339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.673520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.673552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.673742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.673775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.674031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.674063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.674252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.674286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.674396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.674427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 
00:36:44.148 [2024-12-13 12:42:11.674610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.674643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.674814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.674847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.675108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.675143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.675320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.675352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.675461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.148 [2024-12-13 12:42:11.675501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.148 qpair failed and we were unable to recover it. 00:36:44.148 [2024-12-13 12:42:11.675625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.675659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.675878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.675912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.676088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.676120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.676305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.676339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.676519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.676556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 
00:36:44.149 [2024-12-13 12:42:11.676680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.676712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.676841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.676875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.677070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.677103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.677303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.677335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.677441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.677476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.677650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.677682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.677871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.677905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.678096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.678128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.678259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.678290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.678482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.678515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 
00:36:44.149 [2024-12-13 12:42:11.678650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.678687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.678872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.678906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.679040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.679072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.679182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.679214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.679388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.679420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.679550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.679581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.679802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.679835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.680030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.680061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.680233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.680264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 00:36:44.149 [2024-12-13 12:42:11.680447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.149 [2024-12-13 12:42:11.680479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420 00:36:44.149 qpair failed and we were unable to recover it. 
00:36:44.149 [2024-12-13 12:42:11.680669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.680700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.680807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.680840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.680956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.680988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.681106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.681137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.681377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.681409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.681592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.681630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.681839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.681875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.682003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.682034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.682231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.682264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.682448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.682479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.682653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.682685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.682934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.682967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.683084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.683116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.683232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.683264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.149 [2024-12-13 12:42:11.683392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.149 [2024-12-13 12:42:11.683424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.149 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.683549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.683581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.683813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.683847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.684054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.684086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.684264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.684297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.684565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.684597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.684800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.684833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.684960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.684992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.685119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.685153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.685395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.685428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.685553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.685586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.685713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.685745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.685946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.685981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.686110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.686142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.686331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.686363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.686659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.686693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.686954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.687192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.687225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.687493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.687532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.687813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.687848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.687966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.687998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.688117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.688150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.688372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.688404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.688580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.688614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd56a0 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.688930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.688978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.689178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.689210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.689389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.689430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.689699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.689734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.689901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.689935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.690123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.690159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.690358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.690391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.690653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.690689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.690897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.690932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.691044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.691076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.691337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.691371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.691564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.691597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
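A note on the repeated failure above and below: errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 yet, so the initiator keeps retrying until the target's listener comes up later in the trace. The same condition can be probed from plain bash; an illustrative sketch, not a command from this run:

    # Probe 10.0.0.2:4420 the way the initiator does; while no listener
    # exists this fails, which is the repeated errno = 111 seen here.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect to 10.0.0.2:4420 refused or timed out"
    fi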
00:36:44.150 [2024-12-13 12:42:11.691859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.691891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.692071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.692103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.692366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 [2024-12-13 12:42:11.692397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.150 qpair failed and we were unable to recover it.
00:36:44.150 [2024-12-13 12:42:11.692653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:44.150 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:44.150 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.151 Malloc0
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.151 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.151 [2024-12-13 12:42:11.807232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.411 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.412 [2024-12-13 12:42:11.832187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.412 12:42:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 541659
00:36:44.412 [2024-12-13 12:42:11.979936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff6dc000b90 with addr=10.0.0.2, port=4420
00:36:44.412 qpair failed and we were unable to recover it.
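The target-side bring-up traced above (malloc bdev, TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, namespace, data and discovery listeners on 10.0.0.2:4420) can be replayed by hand against a running nvmf_tgt with SPDK's scripts/rpc.py; a minimal sketch using the same arguments as the rpc_cmd calls in the trace, assuming an SPDK checkout and the default RPC socket:

    # Create a 64 MB malloc bdev with 512-byte blocks to export
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Initialize the TCP transport (flags as used by the test script)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # Create the subsystem, attach the namespace, and start listening
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420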
00:36:44.412 [2024-12-13 12:42:11.988347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:11.988468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:11.988518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:11.988542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:11.988564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:11.988618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:11.998242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:11.998337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:11.998369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:11.998384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:11.998397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:11.998428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.008260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.008344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.008362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.008371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.008380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.008401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.018276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.018341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.018354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.018362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.018368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.018382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.028263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.028315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.028328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.028334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.028341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.028355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.038263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.038318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.038332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.038343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.038349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.038364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.048297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.048352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.048366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.048373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.048379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.048394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.058382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.058479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.058492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.058499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.058506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.058521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.068380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.068437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.068450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.068457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.068464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.068479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.078370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.078429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.078443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.078450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.078457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.412 [2024-12-13 12:42:12.078475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.412 qpair failed and we were unable to recover it.
00:36:44.412 [2024-12-13 12:42:12.088427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.412 [2024-12-13 12:42:12.088501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.412 [2024-12-13 12:42:12.088515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.412 [2024-12-13 12:42:12.088522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.412 [2024-12-13 12:42:12.088528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.413 [2024-12-13 12:42:12.088542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.413 qpair failed and we were unable to recover it.
00:36:44.413 [2024-12-13 12:42:12.098413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.413 [2024-12-13 12:42:12.098510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.413 [2024-12-13 12:42:12.098525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.413 [2024-12-13 12:42:12.098532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.413 [2024-12-13 12:42:12.098538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.413 [2024-12-13 12:42:12.098554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.413 qpair failed and we were unable to recover it.
00:36:44.413 [2024-12-13 12:42:12.108398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.413 [2024-12-13 12:42:12.108459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.413 [2024-12-13 12:42:12.108473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.413 [2024-12-13 12:42:12.108480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.413 [2024-12-13 12:42:12.108486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.413 [2024-12-13 12:42:12.108501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.413 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.118423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.118481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.118497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.118505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.118513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.118529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.128500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.128562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.128576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.128583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.128589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.128604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.138559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.138618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.138631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.138638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.138645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.138660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.148564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.148618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.148631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.148637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.148644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.148659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.158577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.158632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.158645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.158652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.158658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.158673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.168617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.168673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.168686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.168696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.168703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.168717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.178659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.178723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.178735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.178742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.178749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.178763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.188690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.188744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.188758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.188764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.188771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.188789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.198697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.198753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.198767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.198774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.198784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.198799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.208713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.208774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.208791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.208797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.208804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.208824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.218745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.218804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.218818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.218824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.218831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.218846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.228792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.675 [2024-12-13 12:42:12.228853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.675 [2024-12-13 12:42:12.228866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.675 [2024-12-13 12:42:12.228874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.675 [2024-12-13 12:42:12.228880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.675 [2024-12-13 12:42:12.228895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.675 qpair failed and we were unable to recover it.
00:36:44.675 [2024-12-13 12:42:12.238823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.238877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.238891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.238897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.238904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.238919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.248846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.248902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.248915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.248923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.248929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.248944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.258937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.258996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.259009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.259016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.259023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.259038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.268940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.269027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.269040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.269047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.269053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.269068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.278986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.279036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.279049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.279056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.279062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.279077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.288930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.288984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.288998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.289004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.289011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.289026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.299045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.299117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.299135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.299142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.299148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.299163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.309074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.676 [2024-12-13 12:42:12.309155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.676 [2024-12-13 12:42:12.309169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.676 [2024-12-13 12:42:12.309176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.676 [2024-12-13 12:42:12.309182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:44.676 [2024-12-13 12:42:12.309196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:44.676 qpair failed and we were unable to recover it.
00:36:44.676 [2024-12-13 12:42:12.319002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.319064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.319077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.319084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.676 [2024-12-13 12:42:12.319090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.676 [2024-12-13 12:42:12.319105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.676 qpair failed and we were unable to recover it. 00:36:44.676 [2024-12-13 12:42:12.329013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.329069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.329082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.329089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.676 [2024-12-13 12:42:12.329095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.676 [2024-12-13 12:42:12.329110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.676 qpair failed and we were unable to recover it. 00:36:44.676 [2024-12-13 12:42:12.339096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.339156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.339169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.339175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.676 [2024-12-13 12:42:12.339185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.676 [2024-12-13 12:42:12.339201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.676 qpair failed and we were unable to recover it. 
00:36:44.676 [2024-12-13 12:42:12.349073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.349146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.349160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.349167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.676 [2024-12-13 12:42:12.349173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.676 [2024-12-13 12:42:12.349187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.676 qpair failed and we were unable to recover it. 00:36:44.676 [2024-12-13 12:42:12.359180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.359243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.359257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.359263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.676 [2024-12-13 12:42:12.359269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.676 [2024-12-13 12:42:12.359284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.676 qpair failed and we were unable to recover it. 00:36:44.676 [2024-12-13 12:42:12.369240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.676 [2024-12-13 12:42:12.369321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.676 [2024-12-13 12:42:12.369336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.676 [2024-12-13 12:42:12.369343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.677 [2024-12-13 12:42:12.369349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.677 [2024-12-13 12:42:12.369364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.677 qpair failed and we were unable to recover it. 
00:36:44.938 [2024-12-13 12:42:12.379160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.379218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.379232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.379239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.379245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.379260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 00:36:44.938 [2024-12-13 12:42:12.389242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.389298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.389312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.389318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.389325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.389341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 00:36:44.938 [2024-12-13 12:42:12.399264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.399317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.399330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.399337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.399343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.399358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 
00:36:44.938 [2024-12-13 12:42:12.409344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.409398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.409412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.409419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.409425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.409441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 00:36:44.938 [2024-12-13 12:42:12.419264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.419326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.419339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.419346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.419352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.419368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 00:36:44.938 [2024-12-13 12:42:12.429319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.429376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.429393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.429400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.429407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.429421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.938 qpair failed and we were unable to recover it. 
00:36:44.938 [2024-12-13 12:42:12.439408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.938 [2024-12-13 12:42:12.439506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.938 [2024-12-13 12:42:12.439520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.938 [2024-12-13 12:42:12.439527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.938 [2024-12-13 12:42:12.439534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.938 [2024-12-13 12:42:12.439548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.449420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.449498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.449512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.449519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.449526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.449542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.459419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.459484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.459497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.459504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.459510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.459524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 
00:36:44.939 [2024-12-13 12:42:12.469412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.469474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.469487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.469494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.469503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.469518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.479537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.479603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.479616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.479623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.479629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.479644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.489574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.489644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.489658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.489664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.489671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.489686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 
00:36:44.939 [2024-12-13 12:42:12.499608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.499666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.499680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.499686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.499693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.499707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.509613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.509671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.509684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.509691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.509697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.509712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.519617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.519670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.519683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.519690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.519696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.519711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 
00:36:44.939 [2024-12-13 12:42:12.529673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.529735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.529748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.529755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.529761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.529776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.539687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.539742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.539755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.539762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.539768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.539786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.549767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.549876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.549890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.549897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.549903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.549917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 
00:36:44.939 [2024-12-13 12:42:12.559702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.559758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.559771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.559778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.559788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.559803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.569752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.939 [2024-12-13 12:42:12.569813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.939 [2024-12-13 12:42:12.569826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.939 [2024-12-13 12:42:12.569833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.939 [2024-12-13 12:42:12.569839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.939 [2024-12-13 12:42:12.569854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.939 qpair failed and we were unable to recover it. 00:36:44.939 [2024-12-13 12:42:12.579740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.579805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.579819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.579826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.579832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.579846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 
00:36:44.940 [2024-12-13 12:42:12.589829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.589885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.589899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.589905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.589911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.589926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 00:36:44.940 [2024-12-13 12:42:12.599861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.599912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.599924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.599934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.599940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.599955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 00:36:44.940 [2024-12-13 12:42:12.609887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.609944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.609958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.609964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.609971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.609986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 
00:36:44.940 [2024-12-13 12:42:12.619959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.620014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.620027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.620034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.620040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.620056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 00:36:44.940 [2024-12-13 12:42:12.629940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.940 [2024-12-13 12:42:12.629993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.940 [2024-12-13 12:42:12.630007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.940 [2024-12-13 12:42:12.630013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.940 [2024-12-13 12:42:12.630021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:44.940 [2024-12-13 12:42:12.630036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:44.940 qpair failed and we were unable to recover it. 00:36:45.201 [2024-12-13 12:42:12.640579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.640643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.640656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.640663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.201 [2024-12-13 12:42:12.640670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.201 [2024-12-13 12:42:12.640688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.201 qpair failed and we were unable to recover it. 
00:36:45.201 [2024-12-13 12:42:12.650054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.650110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.650123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.650130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.201 [2024-12-13 12:42:12.650136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.201 [2024-12-13 12:42:12.650152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.201 qpair failed and we were unable to recover it. 00:36:45.201 [2024-12-13 12:42:12.660109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.660164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.660177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.660184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.201 [2024-12-13 12:42:12.660191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.201 [2024-12-13 12:42:12.660205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.201 qpair failed and we were unable to recover it. 00:36:45.201 [2024-12-13 12:42:12.670108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.670168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.670181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.670188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.201 [2024-12-13 12:42:12.670195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.201 [2024-12-13 12:42:12.670210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.201 qpair failed and we were unable to recover it. 
00:36:45.201 [2024-12-13 12:42:12.680155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.680206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.680219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.680226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.201 [2024-12-13 12:42:12.680232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.201 [2024-12-13 12:42:12.680247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.201 qpair failed and we were unable to recover it. 00:36:45.201 [2024-12-13 12:42:12.690114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.201 [2024-12-13 12:42:12.690170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.201 [2024-12-13 12:42:12.690183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.201 [2024-12-13 12:42:12.690190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.690196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.690211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.700162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.700219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.700232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.700239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.700246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.700260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 
00:36:45.202 [2024-12-13 12:42:12.710178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.710262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.710275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.710283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.710289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.710303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.720162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.720245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.720258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.720265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.720271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.720286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.730257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.730313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.730327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.730342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.730348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.730363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 
00:36:45.202 [2024-12-13 12:42:12.740266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.740320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.740333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.740340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.740346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.740361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.750291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.750344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.750357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.750364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.750370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.750385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.760316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.760368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.760380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.760387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.760393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.760408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 
00:36:45.202 [2024-12-13 12:42:12.770333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.770395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.770407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.770414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.770420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.770438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.780373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.780445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.780458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.780465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.780471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.780486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.790459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.790515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.790529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.790536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.790542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.790558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 
00:36:45.202 [2024-12-13 12:42:12.800453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.800522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.800536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.800542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.800548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.800564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.810488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.810539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.810552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.810559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.810565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.810580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 00:36:45.202 [2024-12-13 12:42:12.820489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.202 [2024-12-13 12:42:12.820543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.202 [2024-12-13 12:42:12.820556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.202 [2024-12-13 12:42:12.820562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.202 [2024-12-13 12:42:12.820568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.202 [2024-12-13 12:42:12.820583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.202 qpair failed and we were unable to recover it. 
00:36:45.203 [2024-12-13 12:42:12.830441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.830504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.830517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.830524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.830530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.830544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 00:36:45.203 [2024-12-13 12:42:12.840548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.840604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.840617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.840624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.840630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.840643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 00:36:45.203 [2024-12-13 12:42:12.850579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.850634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.850648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.850655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.850661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.850676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 
00:36:45.203 [2024-12-13 12:42:12.860610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.860665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.860681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.860687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.860694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.860708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 00:36:45.203 [2024-12-13 12:42:12.870694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.870802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.870816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.870823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.870829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.870844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 00:36:45.203 [2024-12-13 12:42:12.880670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.880735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.880748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.880755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.880761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.880775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 
00:36:45.203 [2024-12-13 12:42:12.890685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.203 [2024-12-13 12:42:12.890737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.203 [2024-12-13 12:42:12.890750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.203 [2024-12-13 12:42:12.890757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.203 [2024-12-13 12:42:12.890763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.203 [2024-12-13 12:42:12.890778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.203 qpair failed and we were unable to recover it. 00:36:45.464 [2024-12-13 12:42:12.900715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.464 [2024-12-13 12:42:12.900790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.464 [2024-12-13 12:42:12.900804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.464 [2024-12-13 12:42:12.900812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.464 [2024-12-13 12:42:12.900821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.464 [2024-12-13 12:42:12.900837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.464 qpair failed and we were unable to recover it. 00:36:45.464 [2024-12-13 12:42:12.910775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.464 [2024-12-13 12:42:12.910860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.464 [2024-12-13 12:42:12.910897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.464 [2024-12-13 12:42:12.910907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.464 [2024-12-13 12:42:12.910914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.464 [2024-12-13 12:42:12.910942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.464 qpair failed and we were unable to recover it. 
00:36:45.464 [2024-12-13 12:42:12.920829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.920926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.920940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.920947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.920953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.920969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:12.930802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.930873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.930887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.930894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.930900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.930916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:12.940870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.940927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.940939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.940946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.940952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.940967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 
00:36:45.465 [2024-12-13 12:42:12.950879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.950935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.950948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.950955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.950962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.950977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:12.960896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.960962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.960976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.960984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.960991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.961006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:12.970919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.970972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.970985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.970992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.970998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.971014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 
00:36:45.465 [2024-12-13 12:42:12.980995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.981051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.981065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.981071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.981077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.981092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:12.990987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:12.991045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:12.991061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:12.991068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:12.991074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:12.991089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:13.000928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.000991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:13.001005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:13.001012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:13.001018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:13.001033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 
00:36:45.465 [2024-12-13 12:42:13.011032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.011112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:13.011125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:13.011131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:13.011138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:13.011152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:13.021082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.021155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:13.021168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:13.021175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:13.021181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:13.021197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:13.031096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.031147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:13.031160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:13.031167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:13.031176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:13.031191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 
00:36:45.465 [2024-12-13 12:42:13.041118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.041171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.465 [2024-12-13 12:42:13.041184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.465 [2024-12-13 12:42:13.041190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.465 [2024-12-13 12:42:13.041197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.465 [2024-12-13 12:42:13.041212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.465 qpair failed and we were unable to recover it. 00:36:45.465 [2024-12-13 12:42:13.051147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.465 [2024-12-13 12:42:13.051201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.051213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.051220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.051226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.051241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.061171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.061237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.061249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.061256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.061263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.061277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 
00:36:45.466 [2024-12-13 12:42:13.071198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.071257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.071270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.071276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.071283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.071298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.081265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.081322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.081335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.081343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.081350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.081364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.091252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.091311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.091324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.091330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.091337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.091351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 
00:36:45.466 [2024-12-13 12:42:13.101346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.101451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.101465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.101471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.101477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.101492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.111292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.111360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.111374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.111381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.111388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.111402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.121368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.121426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.121439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.121446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.121452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.121467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 
00:36:45.466 [2024-12-13 12:42:13.131361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.131413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.131426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.131432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.131439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.131453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.141403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.141470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.141483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.141490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.141496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.141511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.466 [2024-12-13 12:42:13.151429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.151485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.151498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.151504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.151511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.151525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 
00:36:45.466 [2024-12-13 12:42:13.161483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.466 [2024-12-13 12:42:13.161540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.466 [2024-12-13 12:42:13.161553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.466 [2024-12-13 12:42:13.161563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.466 [2024-12-13 12:42:13.161569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.466 [2024-12-13 12:42:13.161584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.466 qpair failed and we were unable to recover it. 00:36:45.728 [2024-12-13 12:42:13.171477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.728 [2024-12-13 12:42:13.171527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.728 [2024-12-13 12:42:13.171541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.728 [2024-12-13 12:42:13.171547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.728 [2024-12-13 12:42:13.171554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.728 [2024-12-13 12:42:13.171569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.728 qpair failed and we were unable to recover it. 00:36:45.728 [2024-12-13 12:42:13.181538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.728 [2024-12-13 12:42:13.181621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.728 [2024-12-13 12:42:13.181634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.728 [2024-12-13 12:42:13.181642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.728 [2024-12-13 12:42:13.181648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.728 [2024-12-13 12:42:13.181663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.728 qpair failed and we were unable to recover it. 
00:36:45.728 [2024-12-13 12:42:13.191542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.728 [2024-12-13 12:42:13.191591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.728 [2024-12-13 12:42:13.191605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.728 [2024-12-13 12:42:13.191611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.728 [2024-12-13 12:42:13.191618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.191633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.201500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.201564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.201577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.201584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.201590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.201609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.211585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.211639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.211653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.211660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.211666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.211681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 
00:36:45.729 [2024-12-13 12:42:13.221633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.221691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.221705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.221711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.221718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.221733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.231616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.231669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.231683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.231690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.231696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.231711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.241679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.241736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.241750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.241757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.241763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.241778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 
00:36:45.729 [2024-12-13 12:42:13.251701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.251756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.251769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.251776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.251788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.251804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.261688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.261744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.261758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.261764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.261771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.261789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.271774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.271840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.271853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.271860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.271866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.271881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 
00:36:45.729 [2024-12-13 12:42:13.281791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.281867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.281880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.281887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.281893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.281907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.291729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.291793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.291807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.291816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.291823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.291837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.301856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.301922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.301935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.301942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.301949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.301964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 
00:36:45.729 [2024-12-13 12:42:13.311880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.311939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.311953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.311959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.311966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.311981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.321912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.729 [2024-12-13 12:42:13.321968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.729 [2024-12-13 12:42:13.321981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.729 [2024-12-13 12:42:13.321988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.729 [2024-12-13 12:42:13.321994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.729 [2024-12-13 12:42:13.322009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.729 qpair failed and we were unable to recover it. 00:36:45.729 [2024-12-13 12:42:13.331933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.331995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.332008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.332015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.332021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.332039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 
00:36:45.730 [2024-12-13 12:42:13.341982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.342048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.342062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.342069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.342075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.342091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 00:36:45.730 [2024-12-13 12:42:13.352005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.352102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.352115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.352122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.352128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.352143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 00:36:45.730 [2024-12-13 12:42:13.362060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.362115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.362128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.362134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.362140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.362155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 
00:36:45.730 [2024-12-13 12:42:13.372075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.372149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.372162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.372169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.372175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.372189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 00:36:45.730 [2024-12-13 12:42:13.382086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.382145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.382159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.382166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.382173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.382187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 00:36:45.730 [2024-12-13 12:42:13.392109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.730 [2024-12-13 12:42:13.392166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.730 [2024-12-13 12:42:13.392179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.730 [2024-12-13 12:42:13.392185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.730 [2024-12-13 12:42:13.392192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:45.730 [2024-12-13 12:42:13.392206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:45.730 qpair failed and we were unable to recover it. 
00:36:45.730 [2024-12-13 12:42:13.402136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.730 [2024-12-13 12:42:13.402203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.730 [2024-12-13 12:42:13.402216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.730 [2024-12-13 12:42:13.402223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.730 [2024-12-13 12:42:13.402229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:45.730 [2024-12-13 12:42:13.402243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:45.730 qpair failed and we were unable to recover it.
00:36:46.517 [the identical seven-line CONNECT failure sequence repeats 68 more times, roughly every 10 ms, from [2024-12-13 12:42:13.412155] through [2024-12-13 12:42:14.084201], each attempt ending in "qpair failed and we were unable to recover it."]
00:36:46.517 [2024-12-13 12:42:14.094144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.517 [2024-12-13 12:42:14.094195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.094208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.094215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.094222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.094237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.104196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.104290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.104303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.104310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.104316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.104331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.114125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.114227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.114243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.114250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.114256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.114271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 
00:36:46.518 [2024-12-13 12:42:14.124221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.124276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.124289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.124295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.124301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.124316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.134296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.134351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.134363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.134370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.134376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.134390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.144302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.144359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.144372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.144378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.144385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.144399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 
00:36:46.518 [2024-12-13 12:42:14.154319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.154377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.154391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.154400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.154406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.154421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.164387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.164442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.164455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.164461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.164467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.164482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.174349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.174402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.174415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.174421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.174428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.174442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 
00:36:46.518 [2024-12-13 12:42:14.184463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.184522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.184535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.184542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.184548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.184562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.194429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.194508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.194521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.194528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.194534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.194549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.518 [2024-12-13 12:42:14.204472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.204529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.204542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.204549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.204556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.204570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 
00:36:46.518 [2024-12-13 12:42:14.214493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.518 [2024-12-13 12:42:14.214550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.518 [2024-12-13 12:42:14.214563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.518 [2024-12-13 12:42:14.214570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.518 [2024-12-13 12:42:14.214577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.518 [2024-12-13 12:42:14.214592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.518 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.224539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.224607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.224621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.224627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.224634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.224649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.234476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.234549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.234562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.234569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.234575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.234592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 
00:36:46.780 [2024-12-13 12:42:14.244576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.244630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.244643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.244651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.244657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.244673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.254658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.254716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.254729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.254737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.254743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.254758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.264669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.264727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.264740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.264746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.264753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.264767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 
00:36:46.780 [2024-12-13 12:42:14.274677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.274735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.274748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.274755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.274762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.274776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.284691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.284741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.284754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.284764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.284770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.284789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.294717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.294771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.294787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.294794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.294800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.294815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 
00:36:46.780 [2024-12-13 12:42:14.304761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.780 [2024-12-13 12:42:14.304847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.780 [2024-12-13 12:42:14.304860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.780 [2024-12-13 12:42:14.304867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.780 [2024-12-13 12:42:14.304873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.780 [2024-12-13 12:42:14.304888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.780 qpair failed and we were unable to recover it. 00:36:46.780 [2024-12-13 12:42:14.314714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.314770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.314787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.314794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.314800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.314815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.324786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.324879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.324892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.324898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.324904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.324926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 
00:36:46.781 [2024-12-13 12:42:14.334849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.334909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.334922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.334929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.334936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.334951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.344944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.345036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.345049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.345056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.345062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.345077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.354899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.354955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.354968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.354975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.354982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.354997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 
00:36:46.781 [2024-12-13 12:42:14.364929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.364981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.364994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.365002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.365008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.365023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.374957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.375013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.375027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.375033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.375040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.375055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.385000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.385063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.385076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.385083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.385089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.385105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 
00:36:46.781 [2024-12-13 12:42:14.395025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.395082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.395095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.395101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.395108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.395122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.405065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.405118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.405131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.405138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.405144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.405159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.415073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.415127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.415143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.415150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.415157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.415171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 
00:36:46.781 [2024-12-13 12:42:14.425122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.425178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.425192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.425199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.425205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.425220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.435145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.435198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.435210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.435217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.781 [2024-12-13 12:42:14.435223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.781 [2024-12-13 12:42:14.435238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.781 qpair failed and we were unable to recover it. 00:36:46.781 [2024-12-13 12:42:14.445162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.781 [2024-12-13 12:42:14.445239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.781 [2024-12-13 12:42:14.445252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.781 [2024-12-13 12:42:14.445259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.782 [2024-12-13 12:42:14.445265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.782 [2024-12-13 12:42:14.445280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.782 qpair failed and we were unable to recover it. 
00:36:46.782 [2024-12-13 12:42:14.455188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.782 [2024-12-13 12:42:14.455243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.782 [2024-12-13 12:42:14.455256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.782 [2024-12-13 12:42:14.455262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.782 [2024-12-13 12:42:14.455269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.782 [2024-12-13 12:42:14.455287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.782 qpair failed and we were unable to recover it. 00:36:46.782 [2024-12-13 12:42:14.465227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.782 [2024-12-13 12:42:14.465285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.782 [2024-12-13 12:42:14.465299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.782 [2024-12-13 12:42:14.465306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.782 [2024-12-13 12:42:14.465312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.782 [2024-12-13 12:42:14.465327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.782 qpair failed and we were unable to recover it. 00:36:46.782 [2024-12-13 12:42:14.475298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.782 [2024-12-13 12:42:14.475352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.782 [2024-12-13 12:42:14.475365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.782 [2024-12-13 12:42:14.475372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.782 [2024-12-13 12:42:14.475378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:46.782 [2024-12-13 12:42:14.475393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:46.782 qpair failed and we were unable to recover it. 
00:36:47.043 [2024-12-13 12:42:14.485273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.043 [2024-12-13 12:42:14.485328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.043 [2024-12-13 12:42:14.485342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.043 [2024-12-13 12:42:14.485349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.043 [2024-12-13 12:42:14.485356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.043 [2024-12-13 12:42:14.485371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.043 qpair failed and we were unable to recover it. 00:36:47.043 [2024-12-13 12:42:14.495316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.043 [2024-12-13 12:42:14.495405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.043 [2024-12-13 12:42:14.495418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.043 [2024-12-13 12:42:14.495425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.043 [2024-12-13 12:42:14.495431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.043 [2024-12-13 12:42:14.495445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.043 qpair failed and we were unable to recover it. 00:36:47.043 [2024-12-13 12:42:14.505275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.043 [2024-12-13 12:42:14.505327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.043 [2024-12-13 12:42:14.505340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.043 [2024-12-13 12:42:14.505347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.043 [2024-12-13 12:42:14.505353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.043 [2024-12-13 12:42:14.505368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.043 qpair failed and we were unable to recover it. 
00:36:47.043 [2024-12-13 12:42:14.515368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.043 [2024-12-13 12:42:14.515425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.043 [2024-12-13 12:42:14.515438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.043 [2024-12-13 12:42:14.515444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.043 [2024-12-13 12:42:14.515451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.043 [2024-12-13 12:42:14.515465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.043 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.525432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.525499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.525513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.525519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.525525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.525540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.535427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.535480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.535493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.535500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.535507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.535522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 
00:36:47.044 [2024-12-13 12:42:14.545464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.545519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.545536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.545542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.545549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.545564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.555508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.555564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.555577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.555584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.555590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.555606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.565511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.565560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.565573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.565580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.565586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.565601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 
00:36:47.044 [2024-12-13 12:42:14.575553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.575646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.575661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.575667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.575674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.575689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.585564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.585621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.585635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.585642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.585652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.585668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 00:36:47.044 [2024-12-13 12:42:14.595594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.044 [2024-12-13 12:42:14.595653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.044 [2024-12-13 12:42:14.595666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.044 [2024-12-13 12:42:14.595673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.044 [2024-12-13 12:42:14.595680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.044 [2024-12-13 12:42:14.595696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.044 qpair failed and we were unable to recover it. 
00:36:47.044 [2024-12-13 12:42:14.605627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.605688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.605702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.044 [2024-12-13 12:42:14.605709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.044 [2024-12-13 12:42:14.605717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.044 [2024-12-13 12:42:14.605732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.044 qpair failed and we were unable to recover it.
00:36:47.044 [2024-12-13 12:42:14.615643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.615697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.615711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.044 [2024-12-13 12:42:14.615717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.044 [2024-12-13 12:42:14.615724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.044 [2024-12-13 12:42:14.615739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.044 qpair failed and we were unable to recover it.
00:36:47.044 [2024-12-13 12:42:14.625679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.625736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.625750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.044 [2024-12-13 12:42:14.625756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.044 [2024-12-13 12:42:14.625762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.044 [2024-12-13 12:42:14.625778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.044 qpair failed and we were unable to recover it.
00:36:47.044 [2024-12-13 12:42:14.635661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.635718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.635732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.044 [2024-12-13 12:42:14.635738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.044 [2024-12-13 12:42:14.635744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.044 [2024-12-13 12:42:14.635759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.044 qpair failed and we were unable to recover it.
00:36:47.044 [2024-12-13 12:42:14.645732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.645789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.645803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.044 [2024-12-13 12:42:14.645810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.044 [2024-12-13 12:42:14.645816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.044 [2024-12-13 12:42:14.645831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.044 qpair failed and we were unable to recover it.
00:36:47.044 [2024-12-13 12:42:14.655835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.044 [2024-12-13 12:42:14.655911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.044 [2024-12-13 12:42:14.655924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.655930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.655937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.655951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.665841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.665899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.665911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.665918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.665924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.665939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.675869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.675927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.675943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.675950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.675956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.675971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.685926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.685981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.685994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.686001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.686008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.686023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.695868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.695925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.695938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.695945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.695952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.695966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.705916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.705985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.705998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.706005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.706011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.706027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.715934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.715993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.716006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.716016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.716022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.716037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.725954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.726032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.726046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.726053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.726059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.726073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.045 [2024-12-13 12:42:14.735972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.045 [2024-12-13 12:42:14.736026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.045 [2024-12-13 12:42:14.736039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.045 [2024-12-13 12:42:14.736045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.045 [2024-12-13 12:42:14.736052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.045 [2024-12-13 12:42:14.736068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.045 qpair failed and we were unable to recover it.
00:36:47.306 [2024-12-13 12:42:14.746054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.306 [2024-12-13 12:42:14.746112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.306 [2024-12-13 12:42:14.746125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.306 [2024-12-13 12:42:14.746133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.306 [2024-12-13 12:42:14.746139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.306 [2024-12-13 12:42:14.746154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.306 qpair failed and we were unable to recover it.
00:36:47.306 [2024-12-13 12:42:14.756088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.306 [2024-12-13 12:42:14.756144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.306 [2024-12-13 12:42:14.756158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.306 [2024-12-13 12:42:14.756164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.306 [2024-12-13 12:42:14.756171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.306 [2024-12-13 12:42:14.756186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.306 qpair failed and we were unable to recover it.
00:36:47.306 [2024-12-13 12:42:14.766068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.306 [2024-12-13 12:42:14.766124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.766138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.766144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.766151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.766165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.776090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.776142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.776155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.776162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.776168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.776183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.786123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.786181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.786195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.786201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.786208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.786223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.796155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.796210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.796224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.796230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.796237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.796251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.806205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.806275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.806288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.806295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.806301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.806317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.816243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.816297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.816310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.816316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.816323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.816338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.826240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.826299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.826311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.826318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.826324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.826339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.836261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.836325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.836338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.836345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.836351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.836366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.846305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.846363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.846376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.846386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.846392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.846406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.856333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.856387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.856402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.856412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.856421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.856436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.866369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.866430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.866443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.866451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.866457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.866471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.876383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.876442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.876454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.876461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.876467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.876482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.886434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.886513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.307 [2024-12-13 12:42:14.886527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.307 [2024-12-13 12:42:14.886534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.307 [2024-12-13 12:42:14.886540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.307 [2024-12-13 12:42:14.886558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.307 qpair failed and we were unable to recover it.
00:36:47.307 [2024-12-13 12:42:14.896440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.307 [2024-12-13 12:42:14.896502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.896516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.896523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.896529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.896544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.906478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.906533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.906546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.906553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.906560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.906575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.916487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.916544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.916557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.916564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.916570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.916585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.926536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.926589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.926602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.926609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.926615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.926630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.936548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.936604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.936617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.936623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.936630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.936644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.946580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.946642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.946665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.946673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.946679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.946699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.956615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.956671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.956685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.956691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.956698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.956713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.966620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.966672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.966686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.966693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.966699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.966714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.976645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.976704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.976722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.976729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.976736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.976750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.986691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.986752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.986767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.986776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.986787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.986802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.308 [2024-12-13 12:42:14.996720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.308 [2024-12-13 12:42:14.996776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.308 [2024-12-13 12:42:14.996794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.308 [2024-12-13 12:42:14.996802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.308 [2024-12-13 12:42:14.996808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.308 [2024-12-13 12:42:14.996824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.308 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.006738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.006795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.006809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.006816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.006823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.006838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.016775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.016833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.016847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.016853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.016863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.016878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.026791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.026848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.026861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.026867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.026874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.026889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.036840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.036897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.036910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.036917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.036923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.036938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.046880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.046974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.046987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.046993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.046999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.047014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.056881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.056933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.056946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.056953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.056959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.056975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.066892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.066951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.066964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.066971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.066977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.066992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.076991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.077045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.077059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.077065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.077072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.077087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.087011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.087064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.087078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.087085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.569 [2024-12-13 12:42:15.087091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.569 [2024-12-13 12:42:15.087105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.569 qpair failed and we were unable to recover it.
00:36:47.569 [2024-12-13 12:42:15.096958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.569 [2024-12-13 12:42:15.097010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.569 [2024-12-13 12:42:15.097024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.569 [2024-12-13 12:42:15.097031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.097038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.097053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.107014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.107092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.107108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.107116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.107122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.107137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.117154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.117239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.117252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.117259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.117265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.117279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.127147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.127207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.127220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.127228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.127233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.127248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.137160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.137218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.137232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.137239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.137245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.137260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.147090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.147147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.147160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.147167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.147176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.147191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.157188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.157252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.157266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.157273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.157279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.157293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.167208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.570 [2024-12-13 12:42:15.167264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.570 [2024-12-13 12:42:15.167277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.570 [2024-12-13 12:42:15.167283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.570 [2024-12-13 12:42:15.167289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.570 [2024-12-13 12:42:15.167304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.570 qpair failed and we were unable to recover it.
00:36:47.570 [2024-12-13 12:42:15.177230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.177278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.177291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.177298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.177304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.177318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.570 qpair failed and we were unable to recover it. 00:36:47.570 [2024-12-13 12:42:15.187274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.187329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.187343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.187349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.187356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.187371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.570 qpair failed and we were unable to recover it. 00:36:47.570 [2024-12-13 12:42:15.197258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.197314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.197328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.197334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.197341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.197355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.570 qpair failed and we were unable to recover it. 
00:36:47.570 [2024-12-13 12:42:15.207312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.207365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.207379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.207385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.207392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.207406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.570 qpair failed and we were unable to recover it. 00:36:47.570 [2024-12-13 12:42:15.217278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.217370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.217383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.217390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.217396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.217411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.570 qpair failed and we were unable to recover it. 00:36:47.570 [2024-12-13 12:42:15.227320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.570 [2024-12-13 12:42:15.227376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.570 [2024-12-13 12:42:15.227389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.570 [2024-12-13 12:42:15.227396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.570 [2024-12-13 12:42:15.227402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.570 [2024-12-13 12:42:15.227417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.571 qpair failed and we were unable to recover it. 
00:36:47.571 [2024-12-13 12:42:15.237420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.571 [2024-12-13 12:42:15.237473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.571 [2024-12-13 12:42:15.237489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.571 [2024-12-13 12:42:15.237495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.571 [2024-12-13 12:42:15.237502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.571 [2024-12-13 12:42:15.237516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.571 qpair failed and we were unable to recover it. 00:36:47.571 [2024-12-13 12:42:15.247380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.571 [2024-12-13 12:42:15.247433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.571 [2024-12-13 12:42:15.247446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.571 [2024-12-13 12:42:15.247453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.571 [2024-12-13 12:42:15.247459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.571 [2024-12-13 12:42:15.247473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.571 qpair failed and we were unable to recover it. 00:36:47.571 [2024-12-13 12:42:15.257462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.571 [2024-12-13 12:42:15.257561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.571 [2024-12-13 12:42:15.257574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.571 [2024-12-13 12:42:15.257581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.571 [2024-12-13 12:42:15.257587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.571 [2024-12-13 12:42:15.257602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.571 qpair failed and we were unable to recover it. 
00:36:47.831 [2024-12-13 12:42:15.267554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.267611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.267625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.267632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.267639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.267654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.277577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.277655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.277668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.277678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.277684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.277699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.287490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.287547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.287561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.287568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.287575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.287591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 
00:36:47.832 [2024-12-13 12:42:15.297609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.297690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.297704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.297711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.297717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.297732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.307559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.307616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.307629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.307636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.307642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.307658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.317588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.317647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.317661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.317668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.317675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.317690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 
00:36:47.832 [2024-12-13 12:42:15.327701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.327753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.327767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.327773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.327779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.327798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.337649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.337700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.337713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.337720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.337727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.337742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.347767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.347841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.347855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.347862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.347868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.347884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 
00:36:47.832 [2024-12-13 12:42:15.357704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.357759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.357773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.357784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.357791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.357806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.367752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.367812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.367826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.367833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.367839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.367854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.377814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.377888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.377902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.377908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.377915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.377930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 
00:36:47.832 [2024-12-13 12:42:15.387867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.387925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.387938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.832 [2024-12-13 12:42:15.387945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.832 [2024-12-13 12:42:15.387952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.832 [2024-12-13 12:42:15.387967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.832 qpair failed and we were unable to recover it. 00:36:47.832 [2024-12-13 12:42:15.397820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.832 [2024-12-13 12:42:15.397874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.832 [2024-12-13 12:42:15.397887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.397894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.397901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.397916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.407911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.407984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.407997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.408006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.408013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.408027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 
00:36:47.833 [2024-12-13 12:42:15.417957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.418013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.418026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.418034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.418040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.418055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.427990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.428046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.428059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.428066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.428073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.428088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.438004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.438058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.438071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.438078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.438084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.438099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 
00:36:47.833 [2024-12-13 12:42:15.448044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.448099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.448112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.448119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.448125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.448143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.458067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.458122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.458135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.458142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.458148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.458163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.468095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.468150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.468163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.468170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.468176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:47.833 [2024-12-13 12:42:15.468191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:47.833 qpair failed and we were unable to recover it. 
00:36:47.833 [2024-12-13 12:42:15.478119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.833 [2024-12-13 12:42:15.478175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.833 [2024-12-13 12:42:15.478188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.833 [2024-12-13 12:42:15.478195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.833 [2024-12-13 12:42:15.478201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.833 [2024-12-13 12:42:15.478216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.833 qpair failed and we were unable to recover it.
00:36:47.833 [2024-12-13 12:42:15.488168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.833 [2024-12-13 12:42:15.488224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.833 [2024-12-13 12:42:15.488237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.833 [2024-12-13 12:42:15.488243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.833 [2024-12-13 12:42:15.488250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90
00:36:47.833 [2024-12-13 12:42:15.488264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:47.833 qpair failed and we were unable to recover it.
00:36:47.833 [2024-12-13 12:42:15.498168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.833 [2024-12-13 12:42:15.498254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.833 [2024-12-13 12:42:15.498299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.833 [2024-12-13 12:42:15.498319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.833 [2024-12-13 12:42:15.498336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:47.833 [2024-12-13 12:42:15.498375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.833 qpair failed and we were unable to recover it.
00:36:47.833 [2024-12-13 12:42:15.508256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.508340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.508376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.508393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.508421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:47.833 [2024-12-13 12:42:15.508447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.518230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.518309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.518331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.518342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.518352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:47.833 [2024-12-13 12:42:15.518385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.833 qpair failed and we were unable to recover it. 00:36:47.833 [2024-12-13 12:42:15.528314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.833 [2024-12-13 12:42:15.528406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.833 [2024-12-13 12:42:15.528419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.833 [2024-12-13 12:42:15.528425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.833 [2024-12-13 12:42:15.528431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:47.833 [2024-12-13 12:42:15.528446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.834 qpair failed and we were unable to recover it. 
00:36:48.094 [2024-12-13 12:42:15.538270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.094 [2024-12-13 12:42:15.538332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.094 [2024-12-13 12:42:15.538351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.094 [2024-12-13 12:42:15.538358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.094 [2024-12-13 12:42:15.538364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.094 [2024-12-13 12:42:15.538379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.094 qpair failed and we were unable to recover it. 00:36:48.094 [2024-12-13 12:42:15.548364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.094 [2024-12-13 12:42:15.548432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.094 [2024-12-13 12:42:15.548445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.094 [2024-12-13 12:42:15.548452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.094 [2024-12-13 12:42:15.548458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.094 [2024-12-13 12:42:15.548473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.094 qpair failed and we were unable to recover it. 00:36:48.094 [2024-12-13 12:42:15.558346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.094 [2024-12-13 12:42:15.558400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.094 [2024-12-13 12:42:15.558414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.094 [2024-12-13 12:42:15.558420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.558427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.558441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 
00:36:48.095 [2024-12-13 12:42:15.568374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.568428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.568441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.568447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.568454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.568468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.578388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.578455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.578469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.578476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.578482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.578500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.588429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.588486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.588501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.588508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.588514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.588528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 
00:36:48.095 [2024-12-13 12:42:15.598455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.598514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.598527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.598533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.598540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.598553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.608474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.608527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.608543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.608550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.608557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.608571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.618505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.618558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.618572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.618579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.618585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.618600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 
00:36:48.095 [2024-12-13 12:42:15.628553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.628612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.628626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.628633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.628639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.628653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.638612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.638714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.638729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.638736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.638742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.638757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.648577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.648632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.648645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.648652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.648658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.648673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 
00:36:48.095 [2024-12-13 12:42:15.658537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.658592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.658606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.658613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.658619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.658633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.668669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.668726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.668743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.668751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.668757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.668771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 00:36:48.095 [2024-12-13 12:42:15.678659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.678711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.678725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.678732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.678739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.678754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.095 qpair failed and we were unable to recover it. 
00:36:48.095 [2024-12-13 12:42:15.688688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.095 [2024-12-13 12:42:15.688741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.095 [2024-12-13 12:42:15.688756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.095 [2024-12-13 12:42:15.688763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.095 [2024-12-13 12:42:15.688769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.095 [2024-12-13 12:42:15.688790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 00:36:48.096 [2024-12-13 12:42:15.698704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.096 [2024-12-13 12:42:15.698761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.096 [2024-12-13 12:42:15.698776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.096 [2024-12-13 12:42:15.698786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.096 [2024-12-13 12:42:15.698795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.096 [2024-12-13 12:42:15.698810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 00:36:48.096 [2024-12-13 12:42:15.708741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.096 [2024-12-13 12:42:15.708802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.096 [2024-12-13 12:42:15.708816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.096 [2024-12-13 12:42:15.708823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.096 [2024-12-13 12:42:15.708830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.096 [2024-12-13 12:42:15.708847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 
00:36:48.096 [2024-12-13 12:42:15.718776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.096 [2024-12-13 12:42:15.718850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.096 [2024-12-13 12:42:15.718864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.096 [2024-12-13 12:42:15.718871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.096 [2024-12-13 12:42:15.718878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.096 [2024-12-13 12:42:15.718892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 00:36:48.096 [2024-12-13 12:42:15.728815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.096 [2024-12-13 12:42:15.728880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.096 [2024-12-13 12:42:15.728893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.096 [2024-12-13 12:42:15.728900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.096 [2024-12-13 12:42:15.728906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.096 [2024-12-13 12:42:15.728921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 00:36:48.096 [2024-12-13 12:42:15.738813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.096 [2024-12-13 12:42:15.738865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.096 [2024-12-13 12:42:15.738880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.096 [2024-12-13 12:42:15.738887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.096 [2024-12-13 12:42:15.738893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.096 [2024-12-13 12:42:15.738907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.096 qpair failed and we were unable to recover it. 
00:36:48.096 [2024-12-13 12:42:15.748874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.096 [2024-12-13 12:42:15.748935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.096 [2024-12-13 12:42:15.748948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.096 [2024-12-13 12:42:15.748955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.096 [2024-12-13 12:42:15.748962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.096 [2024-12-13 12:42:15.748976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.096 qpair failed and we were unable to recover it.
00:36:48.096 [2024-12-13 12:42:15.758880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.096 [2024-12-13 12:42:15.758931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.096 [2024-12-13 12:42:15.758945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.096 [2024-12-13 12:42:15.758952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.096 [2024-12-13 12:42:15.758958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.096 [2024-12-13 12:42:15.758973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.096 qpair failed and we were unable to recover it.
00:36:48.096 [2024-12-13 12:42:15.768941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.096 [2024-12-13 12:42:15.768993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.096 [2024-12-13 12:42:15.769007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.096 [2024-12-13 12:42:15.769013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.096 [2024-12-13 12:42:15.769020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.096 [2024-12-13 12:42:15.769034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.096 qpair failed and we were unable to recover it.
00:36:48.096 [2024-12-13 12:42:15.778930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.096 [2024-12-13 12:42:15.778987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.096 [2024-12-13 12:42:15.779003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.096 [2024-12-13 12:42:15.779010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.096 [2024-12-13 12:42:15.779017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.096 [2024-12-13 12:42:15.779032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.096 qpair failed and we were unable to recover it.
00:36:48.096 [2024-12-13 12:42:15.788908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.096 [2024-12-13 12:42:15.788962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.096 [2024-12-13 12:42:15.788977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.096 [2024-12-13 12:42:15.788984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.096 [2024-12-13 12:42:15.788991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.096 [2024-12-13 12:42:15.789005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.096 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.798996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.799049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.799066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.799073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.799080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.799094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.808941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.808997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.809011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.809018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.809025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.809039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.819032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.819096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.819110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.819117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.819123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.819137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.829080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.829135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.829149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.829156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.829163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.829178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.839157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.839218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.839232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.839240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.839246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.839263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.849120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.849192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.849207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.849213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.849219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.849234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.859157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.859234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.859247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.859254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.859261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.859275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.869195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.869257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.869271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.869278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.869284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.869298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.879214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.879270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.879283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.879290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.879297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.879310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.889154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.889211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.889226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.889234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.889240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.889255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.899287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.899340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.899353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.899360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.357 [2024-12-13 12:42:15.899366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.357 [2024-12-13 12:42:15.899381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.357 qpair failed and we were unable to recover it.
00:36:48.357 [2024-12-13 12:42:15.909250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.357 [2024-12-13 12:42:15.909319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.357 [2024-12-13 12:42:15.909333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.357 [2024-12-13 12:42:15.909339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.909345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.909360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.919310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.919364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.919377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.919384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.919391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.919405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.929349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.929403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.929417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.929427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.929433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.929448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.939395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.939467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.939482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.939489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.939495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.939509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.949414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.949482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.949496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.949503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.949509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.949523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.959449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.959509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.959523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.959530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.959536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.959550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.969475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.969534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.969547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.969554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.969561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.969578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.979468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.979521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.979534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.979540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.979547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.979561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.989570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.989623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.989638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.989644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.989651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.989666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:15.999540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:15.999593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:15.999607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:15.999614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:15.999621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:15.999635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:16.009571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:16.009627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:16.009641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:16.009647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:16.009654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:16.009668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:16.019628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:16.019711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:16.019726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:16.019733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:16.019739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:16.019753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:16.029637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:16.029702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:16.029716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:16.029723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:16.029729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.358 [2024-12-13 12:42:16.029743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.358 qpair failed and we were unable to recover it.
00:36:48.358 [2024-12-13 12:42:16.039695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.358 [2024-12-13 12:42:16.039760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.358 [2024-12-13 12:42:16.039775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.358 [2024-12-13 12:42:16.039785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.358 [2024-12-13 12:42:16.039792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.359 [2024-12-13 12:42:16.039807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.359 qpair failed and we were unable to recover it.
00:36:48.359 [2024-12-13 12:42:16.049684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.359 [2024-12-13 12:42:16.049741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.359 [2024-12-13 12:42:16.049755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.359 [2024-12-13 12:42:16.049761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.359 [2024-12-13 12:42:16.049768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.359 [2024-12-13 12:42:16.049786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.359 qpair failed and we were unable to recover it.
00:36:48.619 [2024-12-13 12:42:16.059713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.619 [2024-12-13 12:42:16.059773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.619 [2024-12-13 12:42:16.059789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.619 [2024-12-13 12:42:16.059800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.619 [2024-12-13 12:42:16.059806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.619 [2024-12-13 12:42:16.059821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.619 qpair failed and we were unable to recover it.
00:36:48.619 [2024-12-13 12:42:16.069744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.619 [2024-12-13 12:42:16.069801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.619 [2024-12-13 12:42:16.069814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.619 [2024-12-13 12:42:16.069821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.619 [2024-12-13 12:42:16.069828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.619 [2024-12-13 12:42:16.069842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.619 qpair failed and we were unable to recover it.
00:36:48.619 [2024-12-13 12:42:16.079771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.619 [2024-12-13 12:42:16.079836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.619 [2024-12-13 12:42:16.079849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.619 [2024-12-13 12:42:16.079856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.619 [2024-12-13 12:42:16.079862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.619 [2024-12-13 12:42:16.079876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.089802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.089867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.089881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.089888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.089894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.089908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.099823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.099879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.099893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.099900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.099907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.099925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.109860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.109917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.109931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.109938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.109944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.109958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.119817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.119883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.119896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.119903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.119910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.119924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.129939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.130001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.130014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.130022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.130028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.130042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.139947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.140003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.140018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.140025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.140031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.140045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.149979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.150051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.150067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.150073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.150080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.150094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.160004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.160060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.160073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.160080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.160086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.160100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.170032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.170088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.170101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.170108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.170115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.170128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.180050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.180134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.180148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.180155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.180161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.180175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.190007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.190061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.190075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.190085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.190091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.190106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.200136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.200192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.200206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.200212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.200218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.200232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.210181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.210239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.210252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.620 [2024-12-13 12:42:16.210259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.620 [2024-12-13 12:42:16.210265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.620 [2024-12-13 12:42:16.210279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.620 qpair failed and we were unable to recover it.
00:36:48.620 [2024-12-13 12:42:16.220154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.620 [2024-12-13 12:42:16.220220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.620 [2024-12-13 12:42:16.220234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.220241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.220248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.220262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.230199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.230254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.230267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.230273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.230280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.230296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.240215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.240318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.240333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.240340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.240346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.240361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.250241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.250295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.250308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.250315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.250321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.250335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.260280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.260337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.260350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.260356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.260363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.260376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.270318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.270375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.270388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.270395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.270402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.270416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.280341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.280413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.280426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.280433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.280439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.280453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.290385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.290438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.290452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.290459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.290466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.290480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.300387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.300449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.300462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.300469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.300476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.300490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.621 [2024-12-13 12:42:16.310434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.621 [2024-12-13 12:42:16.310494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.621 [2024-12-13 12:42:16.310507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.621 [2024-12-13 12:42:16.310514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.621 [2024-12-13 12:42:16.310521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0
00:36:48.621 [2024-12-13 12:42:16.310535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.621 qpair failed and we were unable to recover it.
00:36:48.883 [2024-12-13 12:42:16.320471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.320569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.320583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.320593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.320599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.320614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.330541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.330641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.330655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.330662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.330668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.330682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.340539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.340588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.340602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.340609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.340615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.340630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.350582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.350651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.350666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.350673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.350679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.350694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.360577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.360643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.360657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.360664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.360670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.360687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.370605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.370666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.370680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.370686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.370692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.370707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.380564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.380650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.380664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.380671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.380677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.380691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.390658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.390713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.390728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.390734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.390741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.390756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.400620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.400676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.400690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.400696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.400703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.400717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.410752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.410826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.410840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.410847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.410853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.410868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.420736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.420793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.420806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.420814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.420821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.420835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.430766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.430828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.430842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.430848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.430855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.430869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.440796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.440851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.440865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.440872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.440878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.440892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.450795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.450864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.450878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.450889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.450895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.450909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.460762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.460819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.460833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.460839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.460845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.460860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.470876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.470958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.470971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.470978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.470984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.470998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.480902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.480960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.480973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.480979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.480986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.481000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 00:36:48.883 [2024-12-13 12:42:16.490876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.490932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.490947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.883 [2024-12-13 12:42:16.490954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.883 [2024-12-13 12:42:16.490960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.883 [2024-12-13 12:42:16.490977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.883 qpair failed and we were unable to recover it. 
00:36:48.883 [2024-12-13 12:42:16.500942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.883 [2024-12-13 12:42:16.501041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.883 [2024-12-13 12:42:16.501054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.501061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.501067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.501081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:48.884 [2024-12-13 12:42:16.511026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.511108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.511122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.511129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.511135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.511149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:48.884 [2024-12-13 12:42:16.521028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.521077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.521090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.521097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.521103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.521117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 
00:36:48.884 [2024-12-13 12:42:16.531072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.531130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.531143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.531150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.531157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.531171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:48.884 [2024-12-13 12:42:16.540996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.541095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.541110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.541116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.541123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.541138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:48.884 [2024-12-13 12:42:16.551055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.551134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.551148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.551155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.551161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.551175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 
00:36:48.884 [2024-12-13 12:42:16.561076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.561130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.561145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.561151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.561158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.561172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:48.884 [2024-12-13 12:42:16.571091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.884 [2024-12-13 12:42:16.571179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.884 [2024-12-13 12:42:16.571193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.884 [2024-12-13 12:42:16.571200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.884 [2024-12-13 12:42:16.571205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:48.884 [2024-12-13 12:42:16.571220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.884 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.581173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.581226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.581240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.581253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.581259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.581274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 
00:36:49.149 [2024-12-13 12:42:16.591263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.591321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.591336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.591343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.591350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.591365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.601198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.601259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.601272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.601279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.601286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.601301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.611264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.611342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.611356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.611362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.611368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.611383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 
00:36:49.149 [2024-12-13 12:42:16.621330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.621386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.621400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.621406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.621413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.621430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.631329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.631385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.631400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.631407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.631413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.631428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.641373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.641427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.641442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.641448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.641455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.641469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 
00:36:49.149 [2024-12-13 12:42:16.651311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.651405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.651419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.651426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.651432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.651446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.661393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.661457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.661471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.661478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.661484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.661498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.671429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.671486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.671499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.671506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.671512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.671526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 
00:36:49.149 [2024-12-13 12:42:16.681506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.681559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.681572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.681579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.681585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.681600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.691523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.691591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.691606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.149 [2024-12-13 12:42:16.691612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.149 [2024-12-13 12:42:16.691619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.149 [2024-12-13 12:42:16.691634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.149 qpair failed and we were unable to recover it. 00:36:49.149 [2024-12-13 12:42:16.701551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.149 [2024-12-13 12:42:16.701605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.149 [2024-12-13 12:42:16.701619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.701625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.701632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.701647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 
00:36:49.150 [2024-12-13 12:42:16.711546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.711650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.711663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.711673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.711679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.711694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.721629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.721682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.721695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.721702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.721708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.721723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.731618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.731706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.731720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.731727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.731735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.731749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 
00:36:49.150 [2024-12-13 12:42:16.741663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.741747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.741762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.741770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.741777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.741795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.751608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.751666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.751680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.751688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.751695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.751712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.761659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.761724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.761739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.761746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.761753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.761767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 
00:36:49.150 [2024-12-13 12:42:16.771795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.771883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.771899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.771906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.771913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.771929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.781731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.781798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.781812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.781820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.781826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.781840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.791776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.791840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.791855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.791862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.791868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.791883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 
00:36:49.150 [2024-12-13 12:42:16.801825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.801936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.801950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.801957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.801963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.801978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.811779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.811843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.811856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.811863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.811870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.811886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.150 [2024-12-13 12:42:16.821854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.821911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.821925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.821932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.821938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.821953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 
00:36:49.150 [2024-12-13 12:42:16.831902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.150 [2024-12-13 12:42:16.831961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.150 [2024-12-13 12:42:16.831975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.150 [2024-12-13 12:42:16.831982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.150 [2024-12-13 12:42:16.831989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.150 [2024-12-13 12:42:16.832004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.150 qpair failed and we were unable to recover it. 00:36:49.151 [2024-12-13 12:42:16.841867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.151 [2024-12-13 12:42:16.841924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.151 [2024-12-13 12:42:16.841939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.151 [2024-12-13 12:42:16.841949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.151 [2024-12-13 12:42:16.841956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.151 [2024-12-13 12:42:16.841971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.151 qpair failed and we were unable to recover it. 00:36:49.492 [2024-12-13 12:42:16.851969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.492 [2024-12-13 12:42:16.852036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.492 [2024-12-13 12:42:16.852054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.492 [2024-12-13 12:42:16.852061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.492 [2024-12-13 12:42:16.852068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.492 [2024-12-13 12:42:16.852084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.492 qpair failed and we were unable to recover it. 
00:36:49.492 [2024-12-13 12:42:16.861986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.862040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.862054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.862061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.862068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.862083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.872014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.872074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.872087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.872094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.872101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.872115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.882043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.882123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.882137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.882143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.882150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.882168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:16.892059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.892126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.892141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.892148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.892154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.892169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.902027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.902082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.902095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.902102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.902108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.902122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.912127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.912183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.912197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.912204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.912210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.912224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:16.922171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.922224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.922238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.922244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.922251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.922265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.932168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.932237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.932251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.932258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.932264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.932278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.942198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.942253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.942267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.942274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.942281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.942295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:16.952229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.952323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.952337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.952343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.952349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.952364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.962271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.962322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.962335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.962341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.962347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.962362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.972270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.972326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.972340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.972350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.972356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.972370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:16.982258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.982343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.982357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.982364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.982370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.982384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:16.992362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:16.992435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:16.992450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:16.992457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:16.992463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:16.992479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:17.002349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:17.002409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:17.002422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:17.002429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:17.002436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:17.002451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:17.012416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:17.012472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:17.012486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:17.012493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:17.012500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:17.012518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:17.022410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:17.022486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:17.022499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:17.022506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:17.022512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:17.022527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:17.032429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:17.032484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:17.032498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:17.032505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:17.032511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:17.032526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 
00:36:49.493 [2024-12-13 12:42:17.042522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.493 [2024-12-13 12:42:17.042578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.493 [2024-12-13 12:42:17.042592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.493 [2024-12-13 12:42:17.042599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.493 [2024-12-13 12:42:17.042605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.493 [2024-12-13 12:42:17.042619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.493 qpair failed and we were unable to recover it. 00:36:49.493 [2024-12-13 12:42:17.052502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.052560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.052574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.052581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.052587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.052602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.062565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.062640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.062654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.062660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.062667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.062682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 
00:36:49.494 [2024-12-13 12:42:17.072568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.072639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.072652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.072659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.072665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.072679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.082624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.082677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.082690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.082696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.082703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.082717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.092619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.092723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.092738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.092744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.092751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.092765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 
00:36:49.494 [2024-12-13 12:42:17.102670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.102758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.102772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.102787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.102794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.102808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.112693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.112749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.112762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.112768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.112775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.112793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.122753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.122813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.122827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.122834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.122840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.122855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 
00:36:49.494 [2024-12-13 12:42:17.132732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.132789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.132802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.132810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.132816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.132831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.494 [2024-12-13 12:42:17.142771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.494 [2024-12-13 12:42:17.142836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.494 [2024-12-13 12:42:17.142851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.494 [2024-12-13 12:42:17.142858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.494 [2024-12-13 12:42:17.142864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.494 [2024-12-13 12:42:17.142883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.494 qpair failed and we were unable to recover it. 00:36:49.754 [2024-12-13 12:42:17.152843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.754 [2024-12-13 12:42:17.152900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.754 [2024-12-13 12:42:17.152914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.754 [2024-12-13 12:42:17.152921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.754 [2024-12-13 12:42:17.152927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.152942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 
00:36:49.755 [2024-12-13 12:42:17.162863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.162920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.162933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.162940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.162946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.162961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.172867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.172924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.172937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.172945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.172951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.172966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.182917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.182974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.182987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.182994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.183001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.183014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 
00:36:49.755 [2024-12-13 12:42:17.192935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.193001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.193015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.193023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.193029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.193043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.202955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.203013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.203026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.203033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.203040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.203054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.213011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.213068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.213081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.213088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.213095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.213109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 
00:36:49.755 [2024-12-13 12:42:17.223002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.223057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.223070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.223077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.223083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.223097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.233038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.233099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.233112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.233122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.233128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.233143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.243066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.243123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.243137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.243145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.243151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.243166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 
00:36:49.755 [2024-12-13 12:42:17.253081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.253158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.253171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.253178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.253185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.253199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.263197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.263256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.263270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.263277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.263283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.263296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 00:36:49.755 [2024-12-13 12:42:17.273195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.273293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.755 [2024-12-13 12:42:17.273307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.755 [2024-12-13 12:42:17.273314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.755 [2024-12-13 12:42:17.273320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.755 [2024-12-13 12:42:17.273337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.755 qpair failed and we were unable to recover it. 
00:36:49.755 [2024-12-13 12:42:17.283184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.755 [2024-12-13 12:42:17.283243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.283257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.283264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.283271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.283285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.293243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.293296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.293310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.293317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.293324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.293338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.303235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.303291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.303305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.303312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.303318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.303333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 
00:36:49.756 [2024-12-13 12:42:17.313241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.313310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.313324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.313330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.313337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.313350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.323297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.323364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.323377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.323384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.323391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.323404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.333349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.333428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.333441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.333448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.333454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.333468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 
00:36:49.756 [2024-12-13 12:42:17.343337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.343390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.343404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.343411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.343418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.343432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.353386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.353473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.353487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.353493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.353499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.353513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.363403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.363460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.363473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.363483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.363489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.363504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 
00:36:49.756 [2024-12-13 12:42:17.373431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.373487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.373500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.373507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.373514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.373528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.383434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.383535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.383548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.383554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.383561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.383574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.393489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.393549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.393564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.393570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.393576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.393591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 
00:36:49.756 [2024-12-13 12:42:17.403503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.403561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.403575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.403583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.403590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.756 [2024-12-13 12:42:17.403607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.756 qpair failed and we were unable to recover it. 00:36:49.756 [2024-12-13 12:42:17.413532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.756 [2024-12-13 12:42:17.413586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.756 [2024-12-13 12:42:17.413598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.756 [2024-12-13 12:42:17.413605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.756 [2024-12-13 12:42:17.413612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.757 [2024-12-13 12:42:17.413627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.757 qpair failed and we were unable to recover it. 00:36:49.757 [2024-12-13 12:42:17.423563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:49.757 [2024-12-13 12:42:17.423637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:49.757 [2024-12-13 12:42:17.423651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:49.757 [2024-12-13 12:42:17.423658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:49.757 [2024-12-13 12:42:17.423664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdd56a0 00:36:49.757 [2024-12-13 12:42:17.423678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.757 qpair failed and we were unable to recover it. 
[ the same CONNECT failure sequence repeats 27 more times, 12:42:17.433 through 12:42:17.694, always against tqpair=0xdd56a0: Unknown controller ID 0x1 -> Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 -> Connect command completed with error: sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0xdd56a0 -> CQ transport error -6 (No such device or address) on qpair id 3 -> qpair failed and we were unable to recover it. ]
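Reading the condensed failure loop: rc -5 is the initiator poll loop's -EIO, sct 1 marks a command-specific status type, and sc 130 (0x82) on a Fabrics CONNECT corresponds to Connect Invalid Parameters, which lines up with the target-side "Unknown controller ID 0x1" message: each new I/O qpair presents a controller ID the target stopped tracking after the forced disconnect. A small shell triage sketch, assuming this console output has been saved to a file ('build.log' below is a placeholder name, not a file the test writes):

    # Illustrative only: count the failed reconnect attempts and bucket them
    # by qpair id; 'build.log' is a placeholder for a saved copy of this log.
    grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' build.log | sort | uniq -c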
00:36:50.019 [2024-12-13 12:42:17.704411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:50.019 [2024-12-13 12:42:17.704550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:50.019 [2024-12-13 12:42:17.704601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:50.019 [2024-12-13 12:42:17.704624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:50.019 [2024-12-13 12:42:17.704644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:50.019 [2024-12-13 12:42:17.704691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:50.019 qpair failed and we were unable to recover it. 00:36:50.019 [2024-12-13 12:42:17.714333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:50.019 [2024-12-13 12:42:17.714425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:50.019 [2024-12-13 12:42:17.714449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:50.019 [2024-12-13 12:42:17.714462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:50.019 [2024-12-13 12:42:17.714474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff6dc000b90 00:36:50.019 [2024-12-13 12:42:17.714502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:50.019 qpair failed and we were unable to recover it. 00:36:50.019 [2024-12-13 12:42:17.714625] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:50.019 A controller has encountered a failure and is being reset. 00:36:50.278 Controller properly reset. 00:36:50.278 Initializing NVMe Controllers 00:36:50.278 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:50.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:50.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:50.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:50.278 Initialization complete. Launching workers. 
00:36:50.278 Starting thread on core 1 00:36:50.278 Starting thread on core 2 00:36:50.278 Starting thread on core 3 00:36:50.278 Starting thread on core 0 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:50.278 00:36:50.278 real 0m10.704s 00:36:50.278 user 0m19.328s 00:36:50.278 sys 0m4.705s 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:50.278 ************************************ 00:36:50.278 END TEST nvmf_target_disconnect_tc2 00:36:50.278 ************************************ 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.278 rmmod nvme_tcp 00:36:50.278 rmmod nvme_fabrics 00:36:50.278 rmmod nvme_keyring 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 542722 ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 542722 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 542722 ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 542722 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542722 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542722' 00:36:50.278 killing process with pid 542722 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 542722 00:36:50.278 12:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 542722 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.538 12:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.077 00:36:53.077 real 0m19.484s 00:36:53.077 user 0m46.852s 00:36:53.077 sys 0m9.559s 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:53.077 ************************************ 00:36:53.077 END TEST nvmf_target_disconnect 00:36:53.077 ************************************ 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:53.077 00:36:53.077 real 7m23.949s 00:36:53.077 user 16m52.597s 00:36:53.077 sys 2m8.413s 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.077 12:42:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.077 ************************************ 00:36:53.077 END TEST nvmf_host 00:36:53.077 ************************************ 00:36:53.077 12:42:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:53.077 12:42:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:53.077 12:42:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:53.077 12:42:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:53.077 12:42:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.077 12:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.077 ************************************ 00:36:53.077 START TEST nvmf_target_core_interrupt_mode 00:36:53.077 ************************************ 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:53.077 * Looking for test storage... 00:36:53.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.077 --rc genhtml_branch_coverage=1 00:36:53.077 --rc genhtml_function_coverage=1 00:36:53.077 --rc genhtml_legend=1 00:36:53.077 --rc geninfo_all_blocks=1 00:36:53.077 --rc geninfo_unexecuted_blocks=1 00:36:53.077 00:36:53.077 ' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.077 --rc genhtml_branch_coverage=1 00:36:53.077 --rc genhtml_function_coverage=1 00:36:53.077 --rc genhtml_legend=1 00:36:53.077 --rc geninfo_all_blocks=1 00:36:53.077 --rc geninfo_unexecuted_blocks=1 00:36:53.077 00:36:53.077 ' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.077 --rc genhtml_branch_coverage=1 00:36:53.077 --rc genhtml_function_coverage=1 00:36:53.077 --rc genhtml_legend=1 00:36:53.077 --rc geninfo_all_blocks=1 00:36:53.077 --rc geninfo_unexecuted_blocks=1 00:36:53.077 00:36:53.077 ' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.077 --rc genhtml_branch_coverage=1 00:36:53.077 --rc genhtml_function_coverage=1 00:36:53.077 --rc genhtml_legend=1 00:36:53.077 --rc geninfo_all_blocks=1 00:36:53.077 --rc geninfo_unexecuted_blocks=1 00:36:53.077 00:36:53.077 ' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.077 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:53.078 ************************************ 00:36:53.078 START TEST nvmf_abort 00:36:53.078 ************************************ 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:53.078 * Looking for test storage... 00:36:53.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.078 --rc genhtml_branch_coverage=1 00:36:53.078 --rc genhtml_function_coverage=1 00:36:53.078 --rc genhtml_legend=1 00:36:53.078 --rc geninfo_all_blocks=1 00:36:53.078 --rc geninfo_unexecuted_blocks=1 00:36:53.078 00:36:53.078 ' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.078 --rc genhtml_branch_coverage=1 00:36:53.078 --rc genhtml_function_coverage=1 00:36:53.078 --rc genhtml_legend=1 00:36:53.078 --rc geninfo_all_blocks=1 00:36:53.078 --rc geninfo_unexecuted_blocks=1 00:36:53.078 00:36:53.078 ' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.078 --rc genhtml_branch_coverage=1 00:36:53.078 --rc genhtml_function_coverage=1 00:36:53.078 --rc genhtml_legend=1 00:36:53.078 --rc geninfo_all_blocks=1 00:36:53.078 --rc geninfo_unexecuted_blocks=1 00:36:53.078 00:36:53.078 ' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:53.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.078 --rc genhtml_branch_coverage=1 00:36:53.078 --rc genhtml_function_coverage=1 00:36:53.078 --rc genhtml_legend=1 00:36:53.078 --rc geninfo_all_blocks=1 00:36:53.078 --rc geninfo_unexecuted_blocks=1 00:36:53.078 00:36:53.078 ' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.078 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.079 12:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.079 12:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:59.651 12:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.651 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:59.652 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
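[annotation] The arrays traced above (e810, x722, mlx) are a whitelist of NIC models the test bed supports, keyed by PCI vendor:device ID and filled from a pci_bus_cache lookup; both E810 ports (0x8086:0x159b, driver ice) then get walked. A minimal sketch of the same classification done straight against sysfs — the ID table comes from the trace, the loop and variable names are illustrative:

  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  for dev in /sys/bus/pci/devices/*; do
      id="$(cat "$dev/vendor"):$(cat "$dev/device")"
      case $id in
          $intel:0x1592|$intel:0x159b) e810+=("${dev##*/}") ;;  # Intel E810 (ice)
          $intel:0x37d2)               x722+=("${dev##*/}") ;;  # Intel X722
          $mellanox:0x101[3579bd]|$mellanox:0x1021|$mellanox:0xa2d6|$mellanox:0xa2dc)
                                       mlx+=("${dev##*/}") ;;   # Mellanox ConnectX family
      esac
  done
  printf 'Found %s (e810)\n' "${e810[@]}"   # here: 0000:af:00.0 and 0000:af:00.1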
00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:59.652 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:59.652 Found net devices under 0000:af:00.0: cvl_0_0 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:59.652 Found net devices under 0000:af:00.1: cvl_0_1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:59.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:59.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:36:59.652 00:36:59.652 --- 10.0.0.2 ping statistics --- 00:36:59.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.652 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:59.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:59.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:36:59.652 00:36:59.652 --- 10.0.0.1 ping statistics --- 00:36:59.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.652 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=547207 
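[annotation] nvmf_tcp_init just built the standard two-endpoint topology for a phy run: the first E810 port is moved into a private network namespace and becomes the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port — the two ports are presumably cabled back-to-back, which is why the cross-namespace pings succeed. Condensed replay of the traced commands (the iptables comment tagging is dropped here):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # both directions answer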
00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 547207 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 547207 ']' 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.652 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 [2024-12-13 12:42:26.655610] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:59.653 [2024-12-13 12:42:26.656567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:59.653 [2024-12-13 12:42:26.656603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.653 [2024-12-13 12:42:26.732856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:59.653 [2024-12-13 12:42:26.755426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.653 [2024-12-13 12:42:26.755460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.653 [2024-12-13 12:42:26.755467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.653 [2024-12-13 12:42:26.755473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.653 [2024-12-13 12:42:26.755479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:59.653 [2024-12-13 12:42:26.756699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:59.653 [2024-12-13 12:42:26.756819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.653 [2024-12-13 12:42:26.756820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:59.653 [2024-12-13 12:42:26.819760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:59.653 [2024-12-13 12:42:26.820666] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:59.653 [2024-12-13 12:42:26.820877] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
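[annotation] nvmfappstart then launches the target inside the namespace: -m 0xE pins reactors to cores 1-3, -e 0xFFFF enables every tracepoint group, and --interrupt-mode is what this whole test variant exercises — the notices confirm each poll group's spdk_thread comes up in interrupt rather than polled mode. A rough stand-in for the traced start-and-wait, assuming the default /var/tmp/spdk.sock RPC socket (the real waitforlisten in autotest_common.sh is more thorough):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the app answers, bailing out if it died
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done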
00:36:59.653 [2024-12-13 12:42:26.821018] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 [2024-12-13 12:42:26.889491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 Malloc0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 Delay0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
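[annotation] With the target up, abort.sh provisions the data path over RPC: a TCP transport, a 64 MiB / 4 KiB-block RAM disk, a delay bdev stacked on top of it (all four latencies set to 1000000 us, i.e. roughly a second, so queued I/O lives long enough to be aborted), and a subsystem exposing the delay bdev plus TCP listeners. The equivalent rpc.py sequence, flags copied from the trace — rpc_cmd is just the test harness wrapper around it:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420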
00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 [2024-12-13 12:42:26.985441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.653 12:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:59.653 [2024-12-13 12:42:27.104234] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:01.557 Initializing NVMe Controllers 00:37:01.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:01.557 controller IO queue size 128 less than required 00:37:01.557 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:01.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:01.557 Initialization complete. Launching workers. 
00:37:01.557 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37524 00:37:01.557 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37581, failed to submit 66 00:37:01.557 success 37524, unsuccessful 57, failed 0 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.816 rmmod nvme_tcp 00:37:01.816 rmmod nvme_fabrics 00:37:01.816 rmmod nvme_keyring 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 547207 ']' 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 547207 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 547207 ']' 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 547207 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547207 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547207' 00:37:01.816 killing process with pid 547207 00:37:01.816 
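[annotation] The abort run's counters reconcile: of 37581 abort commands submitted, 37524 succeeded and 57 came back unsuccessful (presumably I/Os the target had already started), while 66 more aborts could not be submitted at all — the example had warned that its queue depth of 128 is smaller than required. That matches the namespace side, where 37524 I/Os ended as aborted ("failed") and only 123 completed normally. A one-liner to sanity-check the bookkeeping:

  success=37524 unsuccessful=57 failed=0 submitted=37581
  (( success + unsuccessful + failed == submitted )) && echo 'abort counters reconcile'

Teardown then runs in reverse: the subsystem is deleted, the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded, and the nvmf_tgt process (reactor_1, pid 547207) is killed.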
12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 547207 00:37:01.816 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 547207 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:02.075 12:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:03.982 00:37:03.982 real 0m11.083s 00:37:03.982 user 0m10.572s 00:37:03.982 sys 0m5.612s 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.982 ************************************ 00:37:03.982 END TEST nvmf_abort 00:37:03.982 ************************************ 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.982 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:04.241 ************************************ 00:37:04.241 START TEST nvmf_ns_hotplug_stress 00:37:04.241 ************************************ 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:04.242 * Looking for test storage... 
00:37:04.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:04.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.242 --rc genhtml_branch_coverage=1 00:37:04.242 --rc genhtml_function_coverage=1 00:37:04.242 --rc genhtml_legend=1 00:37:04.242 --rc geninfo_all_blocks=1 00:37:04.242 --rc geninfo_unexecuted_blocks=1 00:37:04.242 00:37:04.242 ' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:04.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.242 --rc genhtml_branch_coverage=1 00:37:04.242 --rc genhtml_function_coverage=1 00:37:04.242 --rc genhtml_legend=1 00:37:04.242 --rc geninfo_all_blocks=1 00:37:04.242 --rc geninfo_unexecuted_blocks=1 00:37:04.242 00:37:04.242 ' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:04.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.242 --rc genhtml_branch_coverage=1 00:37:04.242 --rc genhtml_function_coverage=1 00:37:04.242 --rc genhtml_legend=1 00:37:04.242 --rc geninfo_all_blocks=1 00:37:04.242 --rc geninfo_unexecuted_blocks=1 00:37:04.242 00:37:04.242 ' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:04.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.242 --rc genhtml_branch_coverage=1 00:37:04.242 --rc genhtml_function_coverage=1 
00:37:04.242 --rc genhtml_legend=1 00:37:04.242 --rc geninfo_all_blocks=1 00:37:04.242 --rc geninfo_unexecuted_blocks=1 00:37:04.242 00:37:04.242 ' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
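[annotation] The lt 1.15 2 probe traced above (scripts/common.sh@365-368) is how the harness decides whether the installed lcov is new enough for the coverage flags in LCOV_OPTS: version strings are split on '.', '-' and ':' and compared field by field. A stripped-down sketch of that comparison, omitting the decimal sanitising step the real script does per field:

  lt() {   # succeeds when $1 is an older version than $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov is pre-2.0: keep the older --rc option spellings'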
00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:04.242 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:04.243 12:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:10.823 12:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:10.823 12:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:10.823 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:10.823 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.823 
12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:10.823 Found net devices under 0000:af:00.0: cvl_0_0 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:10.823 Found net devices under 0000:af:00.1: cvl_0_1 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:10.823 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:10.824 12:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:10.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:10.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:37:10.824 00:37:10.824 --- 10.0.0.2 ping statistics --- 00:37:10.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.824 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:10.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:10.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:37:10.824 00:37:10.824 --- 10.0.0.1 ping statistics --- 00:37:10.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.824 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=551104 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 551104 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 551104 ']' 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
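The ping exchange above closes out nvmftestinit: gather_supported_nvmf_pci_devs matched the two E810 ports (0x8086:0x159b) against its PCI device-ID whitelist and resolved their netdevs through /sys/bus/pci/devices/$pci/net/*, and nvmf_tcp_init then wired them into a single-host loopback topology, with the target-side port hidden in a private network namespace so both ends of the TCP connection can run on one machine. Condensed and annotated, the wiring sequence from the trace is (interface and address names exactly as logged; a recap sketch, not the script itself):

    # Start from clean addresses on both ports.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Move the target-side port into its own namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns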
00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.824 12:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.824 [2024-12-13 12:42:37.927880] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:10.824 [2024-12-13 12:42:37.928754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:10.824 [2024-12-13 12:42:37.928792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.824 [2024-12-13 12:42:38.003912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:10.824 [2024-12-13 12:42:38.025830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.824 [2024-12-13 12:42:38.025866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.824 [2024-12-13 12:42:38.025873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.824 [2024-12-13 12:42:38.025879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.824 [2024-12-13 12:42:38.025887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.824 [2024-12-13 12:42:38.027103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.824 [2024-12-13 12:42:38.027213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.824 [2024-12-13 12:42:38.027214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.824 [2024-12-13 12:42:38.090104] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:10.824 [2024-12-13 12:42:38.090974] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:10.824 [2024-12-13 12:42:38.091174] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:10.824 [2024-12-13 12:42:38.091330] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
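At this point nvmf_tgt is up in interrupt mode: the notices above confirm the 0xE core mask produced three reactors (cores 1-3) and that the app thread and every nvmf_tgt poll-group thread were switched to interrupt-driven operation instead of busy polling. waitforlisten then blocks until the RPC socket answers. A minimal sketch of what such a readiness wait can look like (rpc_get_methods is a standard SPDK RPC, but this loop is only an illustration under that assumption, not the actual autotest helper):

    # Poll the app's RPC socket until it is ready to serve requests.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nvmfpid=551104   # pid reported by nvmfappstart above
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        # Bail out early if the target died instead of coming up.
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done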
00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:10.824 [2024-12-13 12:42:38.324101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.824 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:11.084 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.084 [2024-12-13 12:42:38.732505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.084 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:11.343 12:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:11.602 Malloc0 00:37:11.602 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:11.860 Delay0 00:37:11.860 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.860 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:12.119 NULL1 00:37:12.119 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
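With the target listening, the script provisions the whole test stack over JSON-RPC: a TCP transport, subsystem cnode1 (allow-any-host, up to 10 namespaces), data and discovery listeners on 10.0.0.2:4420, a 32 MB malloc bdev wrapped in the Delay0 delay bdev (the -r/-t/-w/-n flags set its average and P99 read/write latencies), a resizable 1000 MB null bdev, and both bdevs exposed as namespaces. Condensed from the trace, using the rpc_py path set at ns_hotplug_stress.sh@11:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Backing devices: a malloc disk behind a delay bdev, plus a null bdev
    # that the stress loop will resize on every pass.
    "$rpc_py" bdev_malloc_create 32 512 -b Malloc0
    "$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc_py" bdev_null_create NULL1 1000 512

    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1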
00:37:12.378 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=551539 00:37:12.378 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:12.378 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:12.378 12:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.637 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.896 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:12.896 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:12.896 true 00:37:12.896 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:12.896 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.155 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.414 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:13.414 12:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:13.673 true 00:37:13.673 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:13.673 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.932 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.191 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:14.191 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:14.191 true 00:37:14.191 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:14.191 12:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.450 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.708 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:14.709 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:14.967 true 00:37:14.967 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:14.967 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.226 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.485 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:15.485 12:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:15.485 true 00:37:15.485 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:15.485 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.744 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.002 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:16.002 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:16.261 true 00:37:16.261 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:16.261 12:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.519 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:37:16.778 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:16.778 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:16.778 true 00:37:16.778 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:16.778 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.037 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.296 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:17.296 12:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:17.554 true 00:37:17.554 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:17.554 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.811 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.069 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:18.070 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:18.070 true 00:37:18.070 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:18.070 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.328 12:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.587 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:18.587 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:18.846 true 00:37:18.846 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:18.846 
12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.105 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.364 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:19.364 12:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:19.364 true 00:37:19.364 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:19.364 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.623 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.882 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:19.882 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:20.141 true 00:37:20.141 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:20.141 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.400 12:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.659 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:20.659 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:20.659 true 00:37:20.659 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:20.659 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.918 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.177 12:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:21.177 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:21.437 true 00:37:21.437 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:21.437 12:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.695 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.953 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:21.953 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:21.953 true 00:37:21.953 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:21.953 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.212 12:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.470 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:22.471 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:22.730 true 00:37:22.730 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:22.730 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.989 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.248 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:23.248 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:23.248 true 00:37:23.248 12:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:23.248 12:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.507 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.766 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:23.766 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:24.025 true 00:37:24.025 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:24.025 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.285 12:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.544 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:24.544 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:24.544 true 00:37:24.544 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:24.544 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.803 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.062 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:25.062 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:25.321 true 00:37:25.321 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:25.321 12:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.580 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.839 12:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:25.839 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:25.839 true 00:37:25.839 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:25.839 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.098 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.357 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:26.357 12:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:26.616 true 00:37:26.616 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:26.616 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.874 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.134 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:27.134 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:27.134 true 00:37:27.134 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:27.134 12:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.393 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.651 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:27.651 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:27.909 true 00:37:27.909 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:27.909 12:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.168 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.427 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:28.427 12:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:28.427 true 00:37:28.427 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:28.427 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.686 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.945 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:28.945 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:29.204 true 00:37:29.204 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:29.204 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.463 12:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.722 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:29.722 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:29.722 true 00:37:29.722 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:29.722 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.981 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.239 12:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:30.239 12:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:30.498 true 00:37:30.498 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:30.498 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.757 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.016 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:31.016 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:31.016 true 00:37:31.016 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:31.016 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.275 12:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.534 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:31.534 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:31.793 true 00:37:31.793 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:31.793 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.052 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.311 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:37:32.311 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:37:32.311 true 00:37:32.311 12:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:32.311 12:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.570 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.829 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:37:32.829 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:37:33.088 true 00:37:33.088 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:33.088 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.347 12:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:33.606 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:37:33.606 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:37:33.606 true 00:37:33.606 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:33.606 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.864 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.124 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:37:34.124 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:37:34.382 true 00:37:34.382 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:34.382 12:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.641 12:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.900 12:43:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:37:34.900 12:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:37:34.900 true 00:37:34.900 12:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:34.900 12:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:35.159 12:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:35.418 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:37:35.419 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:37:35.677 true 00:37:35.677 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:35.677 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:35.937 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.196 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:37:36.196 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:37:36.196 true 00:37:36.196 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:36.196 12:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.455 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.714 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:37:36.714 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:37:36.973 true 00:37:36.973 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:36.973 12:43:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.232 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:37.491 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:37:37.491 12:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:37:37.491 true 00:37:37.751 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:37.751 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.751 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.010 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:37:38.010 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:37:38.268 true 00:37:38.268 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:38.268 12:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.527 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.785 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:37:38.786 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:37:39.045 true 00:37:39.045 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:39.045 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:39.045 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:39.303 12:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:37:39.303 12:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:37:39.562 true 00:37:39.562 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:39.562 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:39.820 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.079 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:37:40.079 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:37:40.338 true 00:37:40.338 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:40.338 12:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:40.338 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.597 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:37:40.597 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:37:40.856 true 00:37:40.856 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:40.856 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.115 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.373 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:37:41.373 12:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:37:41.632 true 00:37:41.632 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539 00:37:41.632 12:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:41.891 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:41.891 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:37:41.891 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:37:42.150 true
00:37:42.150 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539
00:37:42.150 12:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:42.409 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:42.667 Initializing NVMe Controllers
00:37:42.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:42.667 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:37:42.667 Controller IO queue size 128, less than required.
00:37:42.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:42.667 WARNING: Some requested NVMe devices were skipped
00:37:42.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:42.667 Initialization complete. Launching workers.
00:37:42.667 ========================================================
00:37:42.667                                                                                Latency(us)
00:37:42.667 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:37:42.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   28230.55      13.78    4533.91    1297.50    8650.66
00:37:42.667 ========================================================
00:37:42.667 Total                                                                   :   28230.55      13.78    4533.91    1297.50    8650.66
00:37:42.667
00:37:42.667 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:37:42.667 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:37:42.926 true
00:37:42.926 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551539
00:37:42.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (551539) - No such process
00:37:42.926 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 551539
00:37:42.926 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:42.926 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:43.185 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:43.185 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:37:43.185 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:37:43.185 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:43.185 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:37:43.444 null0
00:37:43.444 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:43.444 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:43.444 12:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:37:43.702 null1
00:37:43.702 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:43.702 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:43.702 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:37:43.702 null2
00:37:43.703
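The controller/latency block above is the I/O workload (PID 551539) finishing and printing its summary. Once it exits, the kill -0 probe at @44 fails with "No such process", which ends the resize loop; the script then reaps the PID and strips both namespaces before setting up the parallel phase. Sketched with the same illustrative names as before:

    # the loop has exited because kill -0 "$perf_pid" failed (process gone)
    wait "$perf_pid"                                                # @53: reap the finished workload
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @54
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2   # @55
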
12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.703 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.703 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:43.962 null3 00:37:43.962 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.962 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.962 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:44.221 null4 00:37:44.221 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:44.221 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:44.221 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:44.221 null5 00:37:44.480 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:44.480 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:44.480 12:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:44.480 null6 00:37:44.480 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:44.480 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:44.480 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:44.740 null7 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
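The @58-@60 records above create the eight 100 MiB null bdevs (null0 through null7, 4096-byte blocks) that the parallel phase hot-plugs, and the @62-@64 records that follow launch one background worker per bdev. The creation loop, as a sketch with $rpc_py again standing in for the full rpc.py path:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        # bdev_null_create <name> <size_mb> <block_size>, matching the traced arguments
        $rpc_py bdev_null_create "null$i" 100 4096
    done
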
00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.740 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 556596 556598 556599 556601 556603 556605 556607 556609 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.741 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.000 12:43:12 
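The eight PIDs on the @66 wait line above (556596 through 556609) are the add_remove workers, and every @14-@18 record from here to the end of this stretch is one of those workers tracing. A reconstruction from the markers visible in the log — the loop bound of 10 and the add/remove pair are read directly from the @16-@18 records, while the surrounding shape is an educated guess at the script:

    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for (( i = 0; i < 10; i++ )); do                                                 # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    for (( i = 0; i < nthreads; i++ )); do   # @62
        add_remove $((i + 1)) "null$i" &     # @63: nsid i+1 backed by bdev null<i>
        pids+=($!)                           # @64
    done
    wait "${pids[@]}"                        # @66: block until all eight workers finish

Running the eight adds and removes concurrently against a single subsystem is, presumably, the point: it races namespace attach against detach on the target while host connections stay active.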
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.000 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.259 12:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:45.259 12:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:45.518 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:45.777 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:46.037 12:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:46.037 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:46.296 
12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.296 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.297 12:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:46.555 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.814 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.073 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.332 12:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.332 12:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.601 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.602 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.864 
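
The block above is one pass of the namespace churn loop in ns_hotplug_stress.sh: the @16 markers are the loop counter, @17 adds namespaces 1-8 (backed by bdevs null0-null7) to cnode1, and @18 tears them back out; adds and removals interleave in the trace because several of these loops run concurrently. A minimal sketch of what script lines 16-18 amount to, reconstructed from the xtrace markers rather than copied from the script:

    # Sketch of the ns_hotplug_stress.sh churn loop, inferred from the @16-@18 markers.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 10; ++i )); do                                                   # @16
        for n in $(shuf -e {1..8}); do                                                 # order varies per pass
            "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 null$((n - 1))   # @17
        done
        for n in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"             # @18
        done
    done
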
12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.864 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:48.123 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:48.382 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.382 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.382 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:48.640 12:43:16 
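
After the final removal pass below, the loop exits, the script clears its signal traps (@68) and calls nvmftestfini (@70), which is traced in full further down: flush dirty pages, unload nvme-tcp and its dependents (the rmmod lines), kill the target process recorded in nvmfpid, and restore iptables and the test namespace. A condensed sketch of that teardown path, simplified from what the @516-@518 and killprocess markers suggest (the real helpers retry the modprobe and carry more guards):

    # Condensed teardown sketch per the nvmftestfini/killprocess trace below; not verbatim.
    nvmfcleanup() {
        sync
        modprobe -v -r nvme-tcp        # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
        modprobe -v -r nvme-fabrics
    }
    killprocess() {
        kill -0 "$1" && kill "$1" && wait "$1"    # probe, terminate, reap (551104 in this run)
    }
    nvmftestfini() {
        nvmfcleanup
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"
        remove_spdk_ns                  # drops the test netns and flushes its interfaces
    }
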
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:48.640 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i ))
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:48.899 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:48.900 rmmod nvme_tcp
00:37:48.900 rmmod nvme_fabrics
00:37:48.900 rmmod nvme_keyring
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 551104 ']'
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 551104
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 551104 ']'
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 551104
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551104
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551104'
00:37:48.900 killing process with pid 551104
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 551104
00:37:48.900 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 551104
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:49.159 12:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:51.695
00:37:51.695 real 0m47.089s
00:37:51.695 user 3m1.917s
00:37:51.695 sys 0m21.123s
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:37:51.695 ************************************
00:37:51.695 END TEST nvmf_ns_hotplug_stress
00:37:51.695 ************************************
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:51.695 ************************************
00:37:51.695 START TEST nvmf_delete_subsystem
00:37:51.695 ************************************
00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
--interrupt-mode 00:37:51.695 * Looking for test storage... 00:37:51.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.695 12:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.695 --rc genhtml_branch_coverage=1 00:37:51.695 --rc genhtml_function_coverage=1 00:37:51.695 --rc genhtml_legend=1 00:37:51.695 --rc geninfo_all_blocks=1 00:37:51.695 --rc geninfo_unexecuted_blocks=1 00:37:51.695 00:37:51.695 ' 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.695 --rc genhtml_branch_coverage=1 00:37:51.695 --rc genhtml_function_coverage=1 00:37:51.695 --rc genhtml_legend=1 00:37:51.695 --rc geninfo_all_blocks=1 00:37:51.695 --rc geninfo_unexecuted_blocks=1 00:37:51.695 00:37:51.695 ' 00:37:51.695 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.695 --rc genhtml_branch_coverage=1 00:37:51.695 --rc genhtml_function_coverage=1 00:37:51.695 --rc genhtml_legend=1 00:37:51.695 --rc geninfo_all_blocks=1 00:37:51.695 --rc geninfo_unexecuted_blocks=1 00:37:51.696 00:37:51.696 ' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.696 --rc genhtml_branch_coverage=1 00:37:51.696 --rc genhtml_function_coverage=1 00:37:51.696 --rc 
genhtml_legend=1 00:37:51.696 --rc geninfo_all_blocks=1 00:37:51.696 --rc geninfo_unexecuted_blocks=1 00:37:51.696 00:37:51.696 ' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.696 12:43:19 
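
The lt 1.15 2 / cmp_versions trace a few entries above is the lcov version gate from scripts/common.sh: both version strings are split on ., - and :, then compared field by field until one side wins. A stripped-down re-implementation of that comparison, covering only the less-than path this run exercises:

    # Stripped-down cmp_versions per the @333-@368 markers above; '<' path only.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not strictly less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'
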
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.696 12:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.975 12:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.975 12:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:56.975 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:56.975 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.975 12:43:24 
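
The device scan above walks the whitelisted NIC PCI IDs (here two Intel 0x159b / E810 functions bound to ice) and, for TCP, keeps only the functions that expose a kernel netdev. The @410-@429 markers reduce to a sysfs glob per device:

    # Rough equivalent of the @410-@429 walk: map each PCI function to its netdev name.
    for pci in "${pci_devs[@]}"; do                         # 0000:af:00.0, 0000:af:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # @411: netdev dirs under sysfs
        pci_net_devs=("${pci_net_devs[@]##*/}")             # @427: strip the path
        net_devs+=("${pci_net_devs[@]}")                    # @429: cvl_0_0, cvl_0_1
    done
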
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:56.975 Found net devices under 0000:af:00.0: cvl_0_0 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.975 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:56.976 Found net devices under 0000:af:00.1: cvl_0_1 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.976 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:37:57.235 00:37:57.235 --- 10.0.0.2 ping statistics --- 00:37:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.235 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:37:57.235 00:37:57.235 --- 10.0.0.1 ping statistics --- 00:37:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.235 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:57.235 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=560886 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 560886 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 560886 ']' 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
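(Note: the nvmf_tcp_init trace above condenses to the following shell sequence. This is a sketch assembled from the traced commands, not a verbatim excerpt of common.sh; the cvl_0_0/cvl_0_1 interface names come from the e810 net-device discovery earlier, and the 'ipts' helper in the trace is the iptables wrapper shown expanded at nvmf/common.sh@790.)

    # Target-side port moves into its own network namespace; the initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # NVMF_INITIATOR_IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 for NVMe-oF and verify reachability in both directions before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1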
00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:57.236 12:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.495 [2024-12-13 12:43:24.943910] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:57.495 [2024-12-13 12:43:24.944843] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:57.495 [2024-12-13 12:43:24.944877] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.495 [2024-12-13 12:43:25.022008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:57.495 [2024-12-13 12:43:25.043048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.495 [2024-12-13 12:43:25.043085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.495 [2024-12-13 12:43:25.043093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.495 [2024-12-13 12:43:25.043099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.495 [2024-12-13 12:43:25.043104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.495 [2024-12-13 12:43:25.044196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.495 [2024-12-13 12:43:25.044197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.495 [2024-12-13 12:43:25.106440] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:57.495 [2024-12-13 12:43:25.106863] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:57.495 [2024-12-13 12:43:25.107127] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
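(Note: waitforlisten, whose trace surrounds this point (nvmfpid=560886, rpc_addr=/var/tmp/spdk.sock, max_retries=100), amounts to polling the app's RPC socket until it answers while checking the pid stays alive. A minimal sketch assuming rpc_get_methods as the probe and the SPDK repo root as the working directory; the real helper in autotest_common.sh carries extra bookkeeping:)

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1     # app died before it started listening
            # any cheap RPC proves the UNIX socket is up; rpc_get_methods is an assumed choice
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }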
00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.495 [2024-12-13 12:43:25.181089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.495 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.754 [2024-12-13 12:43:25.213417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.754 NULL1 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.754 12:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.754 Delay0 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=560907 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:57.754 12:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:57.754 [2024-12-13 12:43:25.324164] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
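(Note: gathered from the rpc_cmd traces above, the stack for this test was built with the RPC sequence below before perf was launched. A sketch: rpc_cmd in the trace is a thin wrapper that here resolves to scripts/rpc.py against /var/tmp/spdk.sock, and perf_pid is captured from $!.)

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # I/O is then driven at the 1 s-latency Delay0 namespace while the subsystem is deleted under it:
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1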
00:37:59.656 12:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:59.656 12:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.656 12:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:59.915 Read completed with error (sct=0, sc=8)
00:37:59.915 Read completed with error (sct=0, sc=8)
00:37:59.915 Write completed with error (sct=0, sc=8)
00:37:59.915 Read completed with error (sct=0, sc=8)
00:37:59.915 starting I/O failed: -6
00:37:59.915 [... long run of 'Read/Write completed with error (sct=0, sc=8)' completions and periodic 'starting I/O failed: -6' lines from spdk_nvme_perf elided ...]
00:37:59.915 [2024-12-13 12:43:27.452211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494920 is same with the state(6) to be set
00:37:59.915 [... further Read/Write completion errors elided ...]
00:37:59.916 [2024-12-13 12:43:27.453714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc578000c80 is same with the state(6) to be set
00:37:59.916 [... further Read/Write completion errors elided ...]
00:38:00.853 [2024-12-13 12:43:28.418717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243d260 is same with the state(6) to be set
00:38:00.853 [... further Read/Write completion errors elided ...]
00:38:00.854 [2024-12-13 12:43:28.455639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc57800d060 is same with the state(6) to be set
00:38:00.854 [... further Read/Write completion errors elided ...]
00:38:00.854 [2024-12-13 12:43:28.455769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc57800d800 is same with the state(6) to be set
00:38:00.854 [... further Read/Write completion errors elided ...]
00:38:00.854 [2024-12-13 12:43:28.456577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24945f0 is same with the state(6) to be set
00:38:00.854 [... further Read/Write completion errors elided ...]
00:38:00.854 [2024-12-13 12:43:28.457128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243fc60 is same with the state(6) to be set
00:38:00.854 Initializing NVMe Controllers
00:38:00.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:00.854 Controller IO queue size 128, less than required.
00:38:00.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:00.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:00.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:00.854 Initialization complete. Launching workers.
00:38:00.854 ========================================================
00:38:00.854                                                                    Latency(us)
00:38:00.854 Device Information                                                       :     IOPS     MiB/s    Average        min        max
00:38:00.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   167.37      0.08  901076.29     301.20 1011834.97
00:38:00.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   157.94      0.08 1002088.95     253.00 2001151.71
00:38:00.854 ========================================================
00:38:00.854 Total                                                                    :   325.31      0.16  950117.55     253.00 2001151.71
00:38:00.854
00:38:00.854 [2024-12-13 12:43:28.457518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243d260 (9): Bad file descriptor
00:38:00.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:38:00.854 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:00.854 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:00.854 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 560907
00:38:00.854 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 560907
00:38:01.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (560907) - No such process
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 560907
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 560907
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:01.423 12:43:28
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 560907 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.423 [2024-12-13 12:43:28.989236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.423 12:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.423 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=561575 00:38:01.423 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:01.423 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:01.423 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:01.423 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:01.423 [2024-12-13 12:43:29.068122] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:01.991 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:01.991 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:01.991 12:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.558 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.558 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:02.558 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.125 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.125 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:03.125 12:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.384 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.384 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:03.384 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.951 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.951 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:03.951 12:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:04.518 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:04.518 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575 00:38:04.518 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:04.777 Initializing NVMe Controllers 00:38:04.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:04.777 Controller IO queue size 128, less than required. 00:38:04.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:04.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:04.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:04.777 Initialization complete. Launching workers. 
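(Note: the repeating @57/@58/@60 records above are delete_subsystem.sh waiting for perf to notice the deleted subsystem and exit. Reconstructed roughly from the traced line numbers; the exact loop body and failure path are assumptions:)

    delay=0                                          # delete_subsystem.sh@56
    while kill -0 "$perf_pid" 2> /dev/null; do       # @57: loop while perf is still alive
        sleep 0.5                                    # @58
        (( delay++ > 20 )) && return 1               # @60: give up after ~10 s (assumed failure path)
    done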
00:38:04.777 ========================================================
00:38:04.777                                                                    Latency(us)
00:38:04.777 Device Information                                                       :     IOPS     MiB/s    Average        min        max
00:38:04.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00      0.06 1002256.13 1000199.55 1040461.16
00:38:04.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00      0.06 1004840.44 1000230.42 1042744.41
00:38:04.777 ========================================================
00:38:04.777 Total                                                                    :   256.00      0.12 1003548.29 1000199.55 1042744.41
00:38:04.777
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561575
00:38:05.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (561575) - No such process
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 561575
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:05.036 rmmod nvme_tcp
00:38:05.036 rmmod nvme_fabrics
00:38:05.036 rmmod nvme_keyring
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 560886 ']'
00:38:05.036 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 560886
00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 560886 ']'
00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 560886
00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 560886 00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 560886' 00:38:05.037 killing process with pid 560886 00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 560886 00:38:05.037 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 560886 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.296 12:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.202 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:07.202 00:38:07.202 real 0m16.018s 00:38:07.202 user 0m26.212s 00:38:07.202 sys 0m5.934s 00:38:07.202 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.202 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.202 ************************************ 00:38:07.202 END TEST nvmf_delete_subsystem 00:38:07.202 ************************************ 00:38:07.461 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:07.461 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:07.461 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.461 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:07.461 ************************************ 00:38:07.461 START TEST nvmf_host_management 00:38:07.461 ************************************ 00:38:07.461 12:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:07.461 * Looking for test storage... 00:38:07.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:07.461 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.462 --rc genhtml_branch_coverage=1 00:38:07.462 --rc genhtml_function_coverage=1 00:38:07.462 --rc genhtml_legend=1 00:38:07.462 --rc geninfo_all_blocks=1 00:38:07.462 --rc geninfo_unexecuted_blocks=1 00:38:07.462 00:38:07.462 ' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.462 --rc genhtml_branch_coverage=1 00:38:07.462 --rc genhtml_function_coverage=1 00:38:07.462 --rc genhtml_legend=1 00:38:07.462 --rc geninfo_all_blocks=1 00:38:07.462 --rc geninfo_unexecuted_blocks=1 00:38:07.462 00:38:07.462 ' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.462 --rc genhtml_branch_coverage=1 00:38:07.462 --rc genhtml_function_coverage=1 00:38:07.462 --rc genhtml_legend=1 00:38:07.462 --rc geninfo_all_blocks=1 00:38:07.462 --rc geninfo_unexecuted_blocks=1 00:38:07.462 00:38:07.462 ' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.462 --rc genhtml_branch_coverage=1 00:38:07.462 --rc genhtml_function_coverage=1 00:38:07.462 --rc genhtml_legend=1 
00:38:07.462 --rc geninfo_all_blocks=1 00:38:07.462 --rc geninfo_unexecuted_blocks=1 00:38:07.462 00:38:07.462 ' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:07.462 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:07.721 12:43:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:07.721 12:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:14.292 12:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:14.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:14.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
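
The trace above shows gather_supported_nvmf_pci_devs classifying NICs by PCI vendor/device ID (building the e810, x722 and mlx arrays) before echoing "Found ..." for each matching port. A minimal standalone sketch of that matching logic, reading the IDs from sysfs; the loop body here is an illustrative assumption, not the SPDK helper verbatim:

    # Sketch: classify NICs by PCI vendor:device ID the way the trace does.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        case "$ven:$dev" in
            # E810 IDs seen in the trace (0x1592, 0x159b)
            "$intel:0x1592"|"$intel:0x159b") e810+=("${pci##*/}") ;;
            "$intel:0x37d2")                 x722+=("${pci##*/}") ;;
            "$mellanox:0x1017"|"$mellanox:0x101b") mlx+=("${pci##*/}") ;;
        esac
    done
    for pci in "${e810[@]}"; do
        dev=$(<"/sys/bus/pci/devices/$pci/device")
        echo "Found $pci (0x8086 - $dev)"
        ls "/sys/bus/pci/devices/$pci/net"   # the net devices behind this port
    done

Run against this rig it would print the same two "Found 0000:af:00.0/1 (0x8086 - 0x159b)" lines and list the cvl_0_0/cvl_0_1 interfaces the test then moves into the namespace.
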
00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:14.292 Found net devices under 0000:af:00.0: cvl_0_0 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:14.292 Found net devices under 0000:af:00.1: cvl_0_1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:14.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:14.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:38:14.292 00:38:14.292 --- 10.0.0.2 ping statistics --- 00:38:14.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.292 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:14.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:14.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:38:14.292 00:38:14.292 --- 10.0.0.1 ping statistics --- 00:38:14.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.292 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:14.292 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:14.293 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:14.293 12:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=565488 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 565488 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565488 ']' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:14.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 [2024-12-13 12:43:41.065544] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:14.293 [2024-12-13 12:43:41.066429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:14.293 [2024-12-13 12:43:41.066464] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.293 [2024-12-13 12:43:41.141077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:14.293 [2024-12-13 12:43:41.164140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.293 [2024-12-13 12:43:41.164177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:14.293 [2024-12-13 12:43:41.164184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.293 [2024-12-13 12:43:41.164191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.293 [2024-12-13 12:43:41.164196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.293 [2024-12-13 12:43:41.165610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.293 [2024-12-13 12:43:41.165645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:14.293 [2024-12-13 12:43:41.165752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.293 [2024-12-13 12:43:41.165754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:14.293 [2024-12-13 12:43:41.228260] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:14.293 [2024-12-13 12:43:41.229338] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:14.293 [2024-12-13 12:43:41.229366] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:14.293 [2024-12-13 12:43:41.229771] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:14.293 [2024-12-13 12:43:41.229815] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
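
The NOTICE lines above come from nvmfappstart: the target is launched inside the cvl_0_0_ns_spdk namespace with --interrupt-mode, DPDK EAL brings up cores 1-4 (mask 0x1E), and each nvmf_tgt poll-group thread is switched to interrupt mode. A condensed sketch of that launch-and-wait sequence; the launch command matches the log, while the polling loop is a simplified stand-in for the real waitforlisten helper:

    # Start the target in the namespace exactly as the trace does.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until the app is up; rpc_get_methods is a standard
    # SPDK RPC that only succeeds once the server listens on /var/tmp/spdk.sock.
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
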
00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 [2024-12-13 12:43:41.294574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 Malloc0 00:38:14.293 [2024-12-13 12:43:41.378618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=565593 00:38:14.293 12:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 565593 /var/tmp/bdevperf.sock 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565593 ']' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:14.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.293 { 00:38:14.293 "params": { 00:38:14.293 "name": "Nvme$subsystem", 00:38:14.293 "trtype": "$TEST_TRANSPORT", 00:38:14.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.293 "adrfam": "ipv4", 00:38:14.293 "trsvcid": "$NVMF_PORT", 00:38:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.293 "hdgst": ${hdgst:-false}, 00:38:14.293 "ddgst": ${ddgst:-false} 00:38:14.293 }, 00:38:14.293 "method": "bdev_nvme_attach_controller" 00:38:14.293 } 00:38:14.293 EOF 00:38:14.293 )") 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
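
The heredoc above (gen_nvmf_target_json) assembles one bdev_nvme_attach_controller entry per subsystem; jq then collapses it into the single JSON document printed next, which bdevperf reads over --json /dev/fd/63. A sketch of the fully expanded config for subsystem 0 and how it reaches bdevperf; the outer "subsystems" wrapper is SPDK's usual JSON-config shape, assumed here rather than shown in the trace:

    # Expanded form of the config the trace prints below, handed to bdevperf
    # via process substitution (which is where /dev/fd/63 comes from).
    gen_nvme0_json() {
        cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    }
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvme0_json) -q 64 -o 65536 -w verify -t 10
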
00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.293 "params": { 00:38:14.293 "name": "Nvme0", 00:38:14.293 "trtype": "tcp", 00:38:14.293 "traddr": "10.0.0.2", 00:38:14.293 "adrfam": "ipv4", 00:38:14.293 "trsvcid": "4420", 00:38:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.293 "hdgst": false, 00:38:14.293 "ddgst": false 00:38:14.293 }, 00:38:14.293 "method": "bdev_nvme_attach_controller" 00:38:14.293 }' 00:38:14.293 [2024-12-13 12:43:41.476509] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:14.293 [2024-12-13 12:43:41.476559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565593 ] 00:38:14.293 [2024-12-13 12:43:41.553575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.293 [2024-12-13 12:43:41.576109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.293 Running I/O for 10 seconds... 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:38:14.293 12:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.551 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.812 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.812 [2024-12-13 12:43:42.294104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.812 [2024-12-13 12:43:42.294146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.812 [2024-12-13 12:43:42.294157] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.813 [2024-12-13 12:43:42.294165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.813 [2024-12-13 12:43:42.294180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.813 [2024-12-13 12:43:42.294176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9370 is same with the state(6) to be set 00:38:14.813 [2024-12-13 12:43:42.294196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124bd40 is same with the state(6) to be set 00:38:14.813 [... the tcp.c:1790 recv-state *ERROR* line for tqpair=0x21c9370 repeats verbatim, timestamps 2024-12-13 12:43:42.294213 through 12:43:42.294559 ...] 00:38:14.813 [2024-12-13 12:43:42.294565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9370 is same with the
state(6) to be set 00:38:14.813 [2024-12-13 12:43:42.294570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9370 is same with the state(6) to be set 00:38:14.813 [2024-12-13 12:43:42.294578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9370 is same with the state(6) to be set 00:38:14.813 [2024-12-13 12:43:42.294584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9370 is same with the state(6) to be set 00:38:14.813 [2024-12-13 12:43:42.294655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.813 [2024-12-13 12:43:42.294768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.813 [2024-12-13 12:43:42.294775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.294985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.294992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.814 [2024-12-13 12:43:42.295364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.814 [2024-12-13 12:43:42.295412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.814 [2024-12-13 12:43:42.295418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:14.815 [2024-12-13 12:43:42.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.295690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247ea0 is same with the state(6) to be set 00:38:14.815 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:14.815 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:14.815 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:14.815 [2024-12-13 12:43:42.296642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:14.815 task offset: 98304 on job bdev=Nvme0n1 fails 00:38:14.815 00:38:14.815 Latency(us) 00:38:14.815 [2024-12-13T11:43:42.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.815 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:14.815 Job: Nvme0n1 ended in about 0.40 seconds with error 00:38:14.815 Verification LBA range: start 0x0 length 0x400 00:38:14.815 Nvme0n1 : 0.40 1899.27 118.70 158.27 0.00 30285.05 3542.06 26838.55 00:38:14.815 [2024-12-13T11:43:42.515Z] =================================================================================================================== 00:38:14.815 [2024-12-13T11:43:42.515Z] Total : 1899.27 118.70 158.27 0.00 30285.05 3542.06 26838.55 00:38:14.815 
[2024-12-13 12:43:42.298987] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:14.815 [2024-12-13 12:43:42.299011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124bd40 (9): Bad file descriptor 00:38:14.815 [2024-12-13 12:43:42.300015] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:38:14.815 [2024-12-13 12:43:42.300091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:14.815 [2024-12-13 12:43:42.300114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.815 [2024-12-13 12:43:42.300130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:38:14.815 [2024-12-13 12:43:42.300139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:38:14.815 [2024-12-13 12:43:42.300147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:14.815 [2024-12-13 12:43:42.300153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x124bd40 00:38:14.815 [2024-12-13 12:43:42.300173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124bd40 (9): Bad file descriptor 00:38:14.815 [2024-12-13 12:43:42.300185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:14.815 [2024-12-13 12:43:42.300196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:14.815 [2024-12-13 12:43:42.300204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:38:14.815 [2024-12-13 12:43:42.300212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:38:14.815 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.815 12:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 565593 00:38:15.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (565593) - No such process 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:15.752 { 00:38:15.752 "params": { 00:38:15.752 "name": "Nvme$subsystem", 00:38:15.752 "trtype": "$TEST_TRANSPORT", 00:38:15.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:15.752 "adrfam": "ipv4", 00:38:15.752 "trsvcid": "$NVMF_PORT", 00:38:15.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:15.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:15.752 "hdgst": ${hdgst:-false}, 00:38:15.752 "ddgst": ${ddgst:-false} 00:38:15.752 }, 00:38:15.752 "method": "bdev_nvme_attach_controller" 00:38:15.752 } 00:38:15.752 EOF 00:38:15.752 )") 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:15.752 12:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:15.752 "params": { 00:38:15.752 "name": "Nvme0", 00:38:15.752 "trtype": "tcp", 00:38:15.752 "traddr": "10.0.0.2", 00:38:15.752 "adrfam": "ipv4", 00:38:15.752 "trsvcid": "4420", 00:38:15.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:15.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:15.752 "hdgst": false, 00:38:15.752 "ddgst": false 00:38:15.752 }, 00:38:15.752 "method": "bdev_nvme_attach_controller" 00:38:15.752 }' 00:38:15.752 [2024-12-13 12:43:43.360154] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:15.752 [2024-12-13 12:43:43.360198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565990 ] 00:38:15.752 [2024-12-13 12:43:43.434809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.011 [2024-12-13 12:43:43.457160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.270 Running I/O for 1 seconds... 00:38:17.206 1984.00 IOPS, 124.00 MiB/s 00:38:17.206 Latency(us) 00:38:17.206 [2024-12-13T11:43:44.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.206 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:17.206 Verification LBA range: start 0x0 length 0x400 00:38:17.206 Nvme0n1 : 1.01 2030.60 126.91 0.00 0.00 31022.85 7021.71 27337.87 00:38:17.206 [2024-12-13T11:43:44.906Z] =================================================================================================================== 00:38:17.206 [2024-12-13T11:43:44.906Z] Total : 2030.60 126.91 0.00 0.00 31022.85 7021.71 27337.87 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:17.465 rmmod nvme_tcp 00:38:17.465 rmmod nvme_fabrics 00:38:17.465 rmmod nvme_keyring 00:38:17.465 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 565488 ']' 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 565488 00:38:17.465 12:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 565488 ']' 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 565488 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565488 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565488' 00:38:17.465 killing process with pid 565488 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 565488 00:38:17.465 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 565488 00:38:17.725 [2024-12-13 12:43:45.210141] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.725 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.681 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:19.682 00:38:19.682 real 0m12.347s 00:38:19.682 user 0m18.459s 
00:38:19.682 sys 0m6.241s 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:19.682 ************************************ 00:38:19.682 END TEST nvmf_host_management 00:38:19.682 ************************************ 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.682 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:19.958 ************************************ 00:38:19.958 START TEST nvmf_lvol 00:38:19.958 ************************************ 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:19.958 * Looking for test storage... 00:38:19.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.958 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:19.959 12:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:19.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.959 --rc genhtml_branch_coverage=1 00:38:19.959 --rc genhtml_function_coverage=1 00:38:19.959 --rc genhtml_legend=1 00:38:19.959 --rc geninfo_all_blocks=1 00:38:19.959 --rc geninfo_unexecuted_blocks=1 00:38:19.959 00:38:19.959 ' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:19.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.959 --rc genhtml_branch_coverage=1 00:38:19.959 --rc genhtml_function_coverage=1 00:38:19.959 --rc genhtml_legend=1 00:38:19.959 --rc geninfo_all_blocks=1 00:38:19.959 --rc geninfo_unexecuted_blocks=1 00:38:19.959 00:38:19.959 ' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:19.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.959 --rc genhtml_branch_coverage=1 00:38:19.959 --rc genhtml_function_coverage=1 00:38:19.959 --rc genhtml_legend=1 00:38:19.959 --rc geninfo_all_blocks=1 00:38:19.959 --rc geninfo_unexecuted_blocks=1 00:38:19.959 00:38:19.959 ' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:19.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.959 --rc genhtml_branch_coverage=1 00:38:19.959 --rc genhtml_function_coverage=1 00:38:19.959 --rc 
genhtml_legend=1 00:38:19.959 --rc geninfo_all_blocks=1 00:38:19.959 --rc geninfo_unexecuted_blocks=1 00:38:19.959 00:38:19.959 ' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.959 12:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:19.959 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:19.960 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:19.960 12:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:26.629 12:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:26.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:26.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:26.629 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:26.630 Found net devices under 0000:af:00.0: cvl_0_0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:26.630 Found net devices under 0000:af:00.1: cvl_0_1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:26.630 
12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:26.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:26.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:38:26.630 00:38:26.630 --- 10.0.0.2 ping statistics --- 00:38:26.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.630 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:26.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:26.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:38:26.630 00:38:26.630 --- 10.0.0.1 ping statistics --- 00:38:26.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.630 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=569694 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 569694 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 569694 ']' 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:26.630 [2024-12-13 12:43:53.487616] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
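Both pings succeeding confirms the topology that nvmf_tcp_init just built: the target-side E810 port (cvl_0_0, 10.0.0.2) now lives in the cvl_0_0_ns_spdk network namespace while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace, so target and initiator traffic crosses the physical link instead of loopback. Condensed from the trace above into a readable sketch (interface, namespace, and rule names exactly as logged; a summary, not the verbatim helper):

  ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 on the initiator-facing port; the comment tags the rule so
  # teardown can strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

The harness then prepends 'ip netns exec cvl_0_0_ns_spdk' to NVMF_APP (the nvmf/common.sh@293 record above), so the target process itself starts inside that namespace.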
00:38:26.630 [2024-12-13 12:43:53.488533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:26.630 [2024-12-13 12:43:53.488567] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.630 [2024-12-13 12:43:53.566380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:26.630 [2024-12-13 12:43:53.588776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.630 [2024-12-13 12:43:53.588857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.630 [2024-12-13 12:43:53.588864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.630 [2024-12-13 12:43:53.588873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.630 [2024-12-13 12:43:53.588878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:26.630 [2024-12-13 12:43:53.590132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.630 [2024-12-13 12:43:53.590237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.630 [2024-12-13 12:43:53.590239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.630 [2024-12-13 12:43:53.653229] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:26.630 [2024-12-13 12:43:53.654041] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:26.630 [2024-12-13 12:43:53.654487] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:26.630 [2024-12-13 12:43:53.654585] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
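Startup is now complete: three reactors (cores 0-2, from -m 0x7) and every spdk_thread report intr mode, which is the point of this interrupt-mode test variant. Stripped of harness plumbing, the launch that produced the notices above and the first RPC issued below reduce to roughly this ($NS and $RPC are shorthand introduced for this sketch; flag readings follow the notices in the trace and common SPDK usage):

  NS='ip netns exec cvl_0_0_ns_spdk'
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # -m 0x7: run on cores 0-2; -e 0xFFFF: enable all tracepoint groups (the
  # app_setup_trace notices above); --interrupt-mode: reactors sleep on file
  # descriptors instead of busy-polling; -i 0: shared-memory instance id,
  # matched later by the 'process_shm --id $NVMF_APP_SHM_ID' trap.
  $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

  # waitforlisten polls until /var/tmp/spdk.sock accepts RPCs, then the test
  # creates the TCP transport; -u 8192 is the IO unit size, the remaining
  # flags are the harness defaults for TCP as seen in the trace
  $RPC nvmf_create_transport -t tcp -o -u 8192

rpc.py talks over a Unix-domain socket, so it needs no netns prefix even though the target's TCP listener lives inside cvl_0_0_ns_spdk.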
00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:26.630 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:26.631 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:26.631 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:26.631 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.631 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:26.631 [2024-12-13 12:43:53.890947] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.631 12:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:26.631 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:26.631 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:26.890 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:26.890 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:26.890 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:27.149 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fc3546e4-2505-4051-b39b-89c2305107ee 00:38:27.149 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc3546e4-2505-4051-b39b-89c2305107ee lvol 20 00:38:27.408 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cb0104e2-46cd-4a8f-a6d1-e5f40dd70454 00:38:27.409 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:27.667 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb0104e2-46cd-4a8f-a6d1-e5f40dd70454 00:38:27.667 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:27.926 [2024-12-13 12:43:55.522799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:38:27.926 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:28.185 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=569959 00:38:28.185 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:28.185 12:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:29.121 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cb0104e2-46cd-4a8f-a6d1-e5f40dd70454 MY_SNAPSHOT 00:38:29.379 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7a172182-8362-4769-b031-911ccf4c961c 00:38:29.379 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cb0104e2-46cd-4a8f-a6d1-e5f40dd70454 30 00:38:29.638 12:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7a172182-8362-4769-b031-911ccf4c961c MY_CLONE 00:38:29.896 12:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5ee65ffc-600c-40d9-9a6a-9bb8308a8487 00:38:29.896 12:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5ee65ffc-600c-40d9-9a6a-9bb8308a8487 00:38:30.463 12:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 569959 00:38:38.582 Initializing NVMe Controllers 00:38:38.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:38.582 Controller IO queue size 128, less than required. 00:38:38.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:38.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:38.582 Initialization complete. Launching workers. 
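"Initialization complete" means spdk_nvme_perf, pinned to cores 3 and 4 via -c 0x18, is now pushing 4 KiB random writes at queue depth 128 while the test mutates the volume underneath it. The RPC sequence traced above, rewritten as a compact sketch (the $lvs/$lvol/$snap/$clone captures mirror the lvs=/lvol=/snapshot=/clone= assignments in the log; illustrative shell, not the test script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC bdev_malloc_create 64 512                     # two 64 MB ram disks, 512 B blocks
  $RPC bdev_malloc_create 64 512
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)     # RPC prints the new lvstore UUID
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)    # 20 MB logical volume
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # under live I/O from spdk_nvme_perf: snapshot, grow, clone, then inflate the
  # clone so it no longer depends on the snapshot's backing clusters
  snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $RPC bdev_lvol_resize "$lvol" 30
  clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
  $RPC bdev_lvol_inflate "$clone"

The "Controller IO queue size 128, less than required" line above is perf's own advisory: requests beyond what the controller exposes simply queue at the NVMe driver, as the message itself says.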
00:38:38.582 ========================================================
00:38:38.582 Latency(us)
00:38:38.582 Device Information : IOPS MiB/s Average min max
00:38:38.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12369.43 48.32 10348.99 2073.59 48114.19
00:38:38.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12477.03 48.74 10259.20 3298.28 39813.74
00:38:38.582 ========================================================
00:38:38.582 Total : 24846.46 97.06 10303.90 2073.59 48114.19
00:38:38.582
00:38:38.582 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:38.841 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb0104e2-46cd-4a8f-a6d1-e5f40dd70454
00:38:38.841 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc3546e4-2505-4051-b39b-89c2305107ee
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:39.100 rmmod nvme_tcp
00:38:39.100 rmmod nvme_fabrics
00:38:39.100 rmmod nvme_keyring
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 569694 ']'
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 569694
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 569694 ']'
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 569694
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569694 00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569694' 00:38:39.100 killing process with pid 569694 00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 569694 00:38:39.100 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 569694 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.359 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.895 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:41.895 00:38:41.895 real 0m21.669s 00:38:41.895 user 0m55.539s 00:38:41.895 sys 0m9.601s 00:38:41.895 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.895 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:41.895 ************************************ 00:38:41.895 END TEST nvmf_lvol 00:38:41.895 ************************************ 00:38:41.895 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:41.895 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:41.896 ************************************ 00:38:41.896 START TEST nvmf_lvs_grow 00:38:41.896 
************************************ 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:41.896 * Looking for test storage... 00:38:41.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.896 --rc genhtml_branch_coverage=1 00:38:41.896 --rc genhtml_function_coverage=1 00:38:41.896 --rc genhtml_legend=1 00:38:41.896 --rc geninfo_all_blocks=1 00:38:41.896 --rc geninfo_unexecuted_blocks=1 00:38:41.896 00:38:41.896 ' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.896 --rc genhtml_branch_coverage=1 00:38:41.896 --rc genhtml_function_coverage=1 00:38:41.896 --rc genhtml_legend=1 00:38:41.896 --rc geninfo_all_blocks=1 00:38:41.896 --rc geninfo_unexecuted_blocks=1 00:38:41.896 00:38:41.896 ' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.896 --rc genhtml_branch_coverage=1 00:38:41.896 --rc genhtml_function_coverage=1 00:38:41.896 --rc genhtml_legend=1 00:38:41.896 --rc geninfo_all_blocks=1 00:38:41.896 --rc geninfo_unexecuted_blocks=1 00:38:41.896 00:38:41.896 ' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.896 --rc genhtml_branch_coverage=1 00:38:41.896 --rc genhtml_function_coverage=1 00:38:41.896 --rc genhtml_legend=1 00:38:41.896 --rc geninfo_all_blocks=1 00:38:41.896 --rc geninfo_unexecuted_blocks=1 00:38:41.896 00:38:41.896 ' 00:38:41.896 12:44:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.896 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:41.897 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:48.467 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.467 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:48.468 12:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:48.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:48.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:48.468 Found net devices under 0000:af:00.0: cvl_0_0 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:48.468 Found net devices under 0000:af:00.1: cvl_0_1 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.468 12:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.468 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.468 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:38:48.468 00:38:48.468 --- 10.0.0.2 ping statistics --- 00:38:48.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.469 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:38:48.469 00:38:48.469 --- 10.0.0.1 ping statistics --- 00:38:48.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.469 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575190 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575190 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575190 ']' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:48.469 [2024-12-13 12:44:15.223734] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
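The second target mirrors the first but runs single-core (-m 0x1; "Total cores available: 1" in the notices below). The lvs_grow_clean pass traced below exercises growing a logical-volume store whose backing device is a plain file exposed as an AIO bdev. Condensed into a sketch (paths, sizes, and RPC flags as logged; $AIO/$RPC/$lvs are shorthand for this sketch, and growing the store itself happens in later steps of the test):

  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  truncate -s 200M "$AIO"                       # 200 MiB backing file
  $RPC bdev_aio_create "$AIO" aio_bdev 4096     # file-backed bdev, 4 KiB blocks
  # 4 MiB clusters give 49 usable data clusters on 200 MiB; the md-pages ratio
  # reserves metadata headroom so the store can be grown later
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB volume inside the store

  truncate -s 400M "$AIO"                       # grow the file under the bdev...
  $RPC bdev_aio_rescan aio_bdev                 # ...re-read its size: 51200 -> 102400 blocks
  # the store still reports 49 total_data_clusters after the rescan:
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

One detail worth noticing in the trace below: the rescan fires an "Unsupported bdev event: type 1" notice from vbdev_lvol, meaning the lvstore sees the base bdev resize but does not grow automatically; verifying the explicit grow path is what this test is for.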
00:38:48.469 [2024-12-13 12:44:15.224617] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:48.469 [2024-12-13 12:44:15.224651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.469 [2024-12-13 12:44:15.311561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.469 [2024-12-13 12:44:15.332811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.469 [2024-12-13 12:44:15.332845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.469 [2024-12-13 12:44:15.332852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.469 [2024-12-13 12:44:15.332858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.469 [2024-12-13 12:44:15.332864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.469 [2024-12-13 12:44:15.333346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.469 [2024-12-13 12:44:15.395334] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:48.469 [2024-12-13 12:44:15.395531] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:48.469 [2024-12-13 12:44:15.625991] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:48.469 ************************************ 00:38:48.469 START TEST lvs_grow_clean 00:38:48.469 ************************************ 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:48.469 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:48.469 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:38:48.469 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:38:48.469 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:48.729 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:48.729 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:48.729 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b lvol 150 00:38:48.987 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c961134-8d12-4f75-b26c-7198bd6ddf5d 00:38:48.987 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:48.987 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:49.246 [2024-12-13 12:44:16.697723] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:49.246 [2024-12-13 12:44:16.697873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:49.246 true 00:38:49.246 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:38:49.246 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:49.246 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:49.246 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:49.505 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c961134-8d12-4f75-b26c-7198bd6ddf5d 00:38:49.764 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:49.764 [2024-12-13 12:44:17.442195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.764 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:50.022 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:50.022 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=575565 00:38:50.022 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:50.022 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 575565 /var/tmp/bdevperf.sock 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 575565 ']' 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:50.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.023 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:50.023 [2024-12-13 12:44:17.666153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:50.023 [2024-12-13 12:44:17.666199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575565 ] 00:38:50.282 [2024-12-13 12:44:17.739243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.282 [2024-12-13 12:44:17.762665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.282 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.282 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:50.282 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:50.541 Nvme0n1 00:38:50.541 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:50.800 [ 00:38:50.800 { 00:38:50.800 "name": "Nvme0n1", 00:38:50.800 "aliases": [ 00:38:50.800 "0c961134-8d12-4f75-b26c-7198bd6ddf5d" 00:38:50.800 ], 00:38:50.800 "product_name": "NVMe disk", 00:38:50.800 "block_size": 4096, 00:38:50.800 "num_blocks": 38912, 00:38:50.800 "uuid": "0c961134-8d12-4f75-b26c-7198bd6ddf5d", 00:38:50.800 "numa_id": 1, 00:38:50.800 "assigned_rate_limits": { 00:38:50.800 "rw_ios_per_sec": 0, 00:38:50.800 "rw_mbytes_per_sec": 0, 00:38:50.800 "r_mbytes_per_sec": 0, 00:38:50.800 "w_mbytes_per_sec": 0 00:38:50.800 }, 00:38:50.800 "claimed": false, 00:38:50.800 "zoned": false, 00:38:50.800 "supported_io_types": { 00:38:50.800 "read": true, 00:38:50.800 "write": true, 00:38:50.800 "unmap": true, 00:38:50.800 "flush": true, 00:38:50.800 "reset": true, 00:38:50.800 "nvme_admin": true, 00:38:50.800 "nvme_io": true, 00:38:50.800 "nvme_io_md": false, 00:38:50.800 "write_zeroes": true, 00:38:50.800 "zcopy": false, 00:38:50.800 "get_zone_info": false, 00:38:50.800 "zone_management": false, 00:38:50.800 "zone_append": false, 00:38:50.800 "compare": true, 00:38:50.800 "compare_and_write": true, 00:38:50.800 "abort": true, 00:38:50.800 "seek_hole": false, 00:38:50.800 "seek_data": false, 00:38:50.800 "copy": true, 
00:38:50.800 "nvme_iov_md": false 00:38:50.800 }, 00:38:50.800 "memory_domains": [ 00:38:50.800 { 00:38:50.800 "dma_device_id": "system", 00:38:50.800 "dma_device_type": 1 00:38:50.800 } 00:38:50.800 ], 00:38:50.800 "driver_specific": { 00:38:50.800 "nvme": [ 00:38:50.800 { 00:38:50.800 "trid": { 00:38:50.800 "trtype": "TCP", 00:38:50.800 "adrfam": "IPv4", 00:38:50.800 "traddr": "10.0.0.2", 00:38:50.800 "trsvcid": "4420", 00:38:50.800 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:50.800 }, 00:38:50.800 "ctrlr_data": { 00:38:50.800 "cntlid": 1, 00:38:50.800 "vendor_id": "0x8086", 00:38:50.800 "model_number": "SPDK bdev Controller", 00:38:50.800 "serial_number": "SPDK0", 00:38:50.800 "firmware_revision": "25.01", 00:38:50.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.800 "oacs": { 00:38:50.800 "security": 0, 00:38:50.800 "format": 0, 00:38:50.800 "firmware": 0, 00:38:50.800 "ns_manage": 0 00:38:50.800 }, 00:38:50.800 "multi_ctrlr": true, 00:38:50.800 "ana_reporting": false 00:38:50.800 }, 00:38:50.800 "vs": { 00:38:50.800 "nvme_version": "1.3" 00:38:50.800 }, 00:38:50.800 "ns_data": { 00:38:50.800 "id": 1, 00:38:50.800 "can_share": true 00:38:50.800 } 00:38:50.800 } 00:38:50.800 ], 00:38:50.800 "mp_policy": "active_passive" 00:38:50.800 } 00:38:50.800 } 00:38:50.800 ] 00:38:50.800 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=575684 00:38:50.800 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:50.800 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:50.800 Running I/O for 10 seconds... 
00:38:51.737 Latency(us) 00:38:51.737 [2024-12-13T11:44:19.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:51.737 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:51.737 [2024-12-13T11:44:19.437Z] =================================================================================================================== 00:38:51.737 [2024-12-13T11:44:19.437Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:51.737 00:38:52.673 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:38:52.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:52.673 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:52.673 [2024-12-13T11:44:20.373Z] =================================================================================================================== 00:38:52.673 [2024-12-13T11:44:20.373Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:52.673 00:38:52.932 true 00:38:52.932 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:38:52.932 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:53.191 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:53.191 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:53.191 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 575684 00:38:53.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.759 Nvme0n1 : 3.00 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:38:53.759 [2024-12-13T11:44:21.459Z] =================================================================================================================== 00:38:53.759 [2024-12-13T11:44:21.459Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:38:53.759 00:38:54.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:54.696 Nvme0n1 : 4.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:54.696 [2024-12-13T11:44:22.396Z] =================================================================================================================== 00:38:54.696 [2024-12-13T11:44:22.396Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:54.696 00:38:56.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.074 Nvme0n1 : 5.00 23393.40 91.38 0.00 0.00 0.00 0.00 0.00 00:38:56.074 [2024-12-13T11:44:23.774Z] =================================================================================================================== 00:38:56.074 [2024-12-13T11:44:23.774Z] Total : 23393.40 91.38 0.00 0.00 0.00 0.00 0.00 00:38:56.074 00:38:57.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.010 Nvme0n1 : 6.00 23452.67 91.61 0.00 0.00 0.00 0.00 0.00 00:38:57.010 [2024-12-13T11:44:24.710Z] 
=================================================================================================================== 00:38:57.010 [2024-12-13T11:44:24.710Z] Total : 23452.67 91.61 0.00 0.00 0.00 0.00 0.00 00:38:57.010 00:38:57.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.954 Nvme0n1 : 7.00 23496.71 91.78 0.00 0.00 0.00 0.00 0.00 00:38:57.954 [2024-12-13T11:44:25.654Z] =================================================================================================================== 00:38:57.954 [2024-12-13T11:44:25.654Z] Total : 23496.71 91.78 0.00 0.00 0.00 0.00 0.00 00:38:57.954 00:38:58.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.890 Nvme0n1 : 8.00 23544.12 91.97 0.00 0.00 0.00 0.00 0.00 00:38:58.890 [2024-12-13T11:44:26.590Z] =================================================================================================================== 00:38:58.890 [2024-12-13T11:44:26.590Z] Total : 23544.12 91.97 0.00 0.00 0.00 0.00 0.00 00:38:58.890 00:38:59.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.827 Nvme0n1 : 9.00 23566.89 92.06 0.00 0.00 0.00 0.00 0.00 00:38:59.827 [2024-12-13T11:44:27.527Z] =================================================================================================================== 00:38:59.827 [2024-12-13T11:44:27.527Z] Total : 23566.89 92.06 0.00 0.00 0.00 0.00 0.00 00:38:59.827 00:39:00.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.764 Nvme0n1 : 10.00 23585.10 92.13 0.00 0.00 0.00 0.00 0.00 00:39:00.764 [2024-12-13T11:44:28.464Z] =================================================================================================================== 00:39:00.764 [2024-12-13T11:44:28.464Z] Total : 23585.10 92.13 0.00 0.00 0.00 0.00 0.00 00:39:00.764 00:39:00.764 00:39:00.764 Latency(us) 00:39:00.764 [2024-12-13T11:44:28.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.764 Nvme0n1 : 10.00 23591.92 92.16 0.00 0.00 5422.63 1934.87 26838.55 00:39:00.764 [2024-12-13T11:44:28.464Z] =================================================================================================================== 00:39:00.764 [2024-12-13T11:44:28.464Z] Total : 23591.92 92.16 0.00 0.00 5422.63 1934.87 26838.55 00:39:00.764 { 00:39:00.764 "results": [ 00:39:00.764 { 00:39:00.764 "job": "Nvme0n1", 00:39:00.764 "core_mask": "0x2", 00:39:00.764 "workload": "randwrite", 00:39:00.764 "status": "finished", 00:39:00.764 "queue_depth": 128, 00:39:00.764 "io_size": 4096, 00:39:00.764 "runtime": 10.002533, 00:39:00.764 "iops": 23591.924165608852, 00:39:00.764 "mibps": 92.15595377190958, 00:39:00.764 "io_failed": 0, 00:39:00.764 "io_timeout": 0, 00:39:00.764 "avg_latency_us": 5422.629317645094, 00:39:00.764 "min_latency_us": 1934.872380952381, 00:39:00.764 "max_latency_us": 26838.55238095238 00:39:00.764 } 00:39:00.764 ], 00:39:00.765 "core_count": 1 00:39:00.765 } 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 575565 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 575565 ']' 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 575565 
00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 575565 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 575565' 00:39:00.765 killing process with pid 575565 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 575565 00:39:00.765 Received shutdown signal, test time was about 10.000000 seconds 00:39:00.765 00:39:00.765 Latency(us) 00:39:00.765 [2024-12-13T11:44:28.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.765 [2024-12-13T11:44:28.465Z] =================================================================================================================== 00:39:00.765 [2024-12-13T11:44:28.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:00.765 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 575565 00:39:01.024 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:01.282 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:01.541 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:01.541 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:01.541 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:01.541 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:01.541 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:01.800 [2024-12-13 12:44:29.369819] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 
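The heart of the clean test sits in the trace above: while bdevperf is still writing, the backing file is doubled, the AIO bdev is rescanned, and the lvstore is grown, after which total_data_clusters must read 99 and, once I/O stops, free_clusters must read 61 (99 clusters minus the 38 held by the 150 MiB lvol at a 4 MiB cluster size). A condensed sketch of that grow-and-verify sequence, assuming the UUID and paths from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  lvs=8f774135-bd4a-4a89-a403-7cc4a1dec13b
  truncate -s 400M "$aio"           # grow the backing file from 200M to 400M
  "$rpc" bdev_aio_rescan aio_bdev   # block count 51200 -> 102400, per the notice above
  "$rpc" bdev_lvol_grow_lvstore -u "$lvs"
  # expect 99 total 4 MiB data clusters, and 61 free once the workload stops
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0].total_data_clusters, .[0].free_clusters'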
00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:01.800 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:02.058 request: 00:39:02.058 { 00:39:02.058 "uuid": "8f774135-bd4a-4a89-a403-7cc4a1dec13b", 00:39:02.058 "method": "bdev_lvol_get_lvstores", 00:39:02.058 "req_id": 1 00:39:02.058 } 00:39:02.058 Got JSON-RPC error response 00:39:02.058 response: 00:39:02.058 { 00:39:02.058 "code": -19, 00:39:02.058 "message": "No such device" 00:39:02.058 } 00:39:02.058 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:02.058 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:02.058 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:02.058 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:02.058 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:02.317 aio_bdev 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
0c961134-8d12-4f75-b26c-7198bd6ddf5d 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0c961134-8d12-4f75-b26c-7198bd6ddf5d 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:02.317 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c961134-8d12-4f75-b26c-7198bd6ddf5d -t 2000 00:39:02.576 [ 00:39:02.576 { 00:39:02.576 "name": "0c961134-8d12-4f75-b26c-7198bd6ddf5d", 00:39:02.576 "aliases": [ 00:39:02.576 "lvs/lvol" 00:39:02.576 ], 00:39:02.576 "product_name": "Logical Volume", 00:39:02.576 "block_size": 4096, 00:39:02.576 "num_blocks": 38912, 00:39:02.576 "uuid": "0c961134-8d12-4f75-b26c-7198bd6ddf5d", 00:39:02.576 "assigned_rate_limits": { 00:39:02.576 "rw_ios_per_sec": 0, 00:39:02.576 "rw_mbytes_per_sec": 0, 00:39:02.576 "r_mbytes_per_sec": 0, 00:39:02.576 "w_mbytes_per_sec": 0 00:39:02.576 }, 00:39:02.576 "claimed": false, 00:39:02.576 "zoned": false, 00:39:02.576 "supported_io_types": { 00:39:02.576 "read": true, 00:39:02.576 "write": true, 00:39:02.576 "unmap": true, 00:39:02.576 "flush": false, 00:39:02.576 "reset": true, 00:39:02.576 "nvme_admin": false, 00:39:02.576 "nvme_io": false, 00:39:02.576 "nvme_io_md": false, 00:39:02.576 "write_zeroes": true, 00:39:02.576 "zcopy": false, 00:39:02.576 "get_zone_info": false, 00:39:02.576 "zone_management": false, 00:39:02.576 "zone_append": false, 00:39:02.576 "compare": false, 00:39:02.576 "compare_and_write": false, 00:39:02.576 "abort": false, 00:39:02.576 "seek_hole": true, 00:39:02.576 "seek_data": true, 00:39:02.576 "copy": false, 00:39:02.576 "nvme_iov_md": false 00:39:02.576 }, 00:39:02.576 "driver_specific": { 00:39:02.576 "lvol": { 00:39:02.576 "lvol_store_uuid": "8f774135-bd4a-4a89-a403-7cc4a1dec13b", 00:39:02.576 "base_bdev": "aio_bdev", 00:39:02.576 "thin_provision": false, 00:39:02.576 "num_allocated_clusters": 38, 00:39:02.576 "snapshot": false, 00:39:02.576 "clone": false, 00:39:02.576 "esnap_clone": false 00:39:02.576 } 00:39:02.576 } 00:39:02.576 } 00:39:02.576 ] 00:39:02.576 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:02.576 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:02.576 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:02.835 12:44:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:02.835 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:02.835 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:03.094 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:03.094 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c961134-8d12-4f75-b26c-7198bd6ddf5d 00:39:03.094 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f774135-bd4a-4a89-a403-7cc4a1dec13b 00:39:03.352 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:03.611 00:39:03.611 real 0m15.480s 00:39:03.611 user 0m15.004s 00:39:03.611 sys 0m1.468s 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:03.611 ************************************ 00:39:03.611 END TEST lvs_grow_clean 00:39:03.611 ************************************ 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:03.611 ************************************ 00:39:03.611 START TEST lvs_grow_dirty 00:39:03.611 ************************************ 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:03.611 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:03.869 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:03.869 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:04.128 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:04.128 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:04.128 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:04.386 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:04.386 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:04.386 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb lvol 150 00:39:04.386 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:04.386 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:04.386 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:04.645 [2024-12-13 12:44:32.233730] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:04.645 [2024-12-13 12:44:32.233881] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:04.645 true 00:39:04.645 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:04.645 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:04.903 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:04.903 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:05.162 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:05.162 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:05.421 [2024-12-13 12:44:33.006163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.421 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=577971 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 577971 /var/tmp/bdevperf.sock 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 577971 ']' 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:05.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
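For the dirty variant the NVMe-oF export plumbing is identical to the clean run, with the new lvol's UUID supplied as the namespace. A condensed sketch of the four traced RPCs, assuming the same rpc.py path:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode0
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK0   # allow any host, serial SPDK0
  "$rpc" nvmf_subsystem_add_ns "$nqn" d05db2a6-0bba-4f24-a7b9-9b94c64f1e80
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420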
00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.680 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:05.680 [2024-12-13 12:44:33.238401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:05.680 [2024-12-13 12:44:33.238452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577971 ] 00:39:05.680 [2024-12-13 12:44:33.310425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.680 [2024-12-13 12:44:33.332202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:05.939 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:05.939 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:05.939 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:06.198 Nvme0n1 00:39:06.198 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:06.457 [ 00:39:06.457 { 00:39:06.457 "name": "Nvme0n1", 00:39:06.457 "aliases": [ 00:39:06.457 "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80" 00:39:06.457 ], 00:39:06.457 "product_name": "NVMe disk", 00:39:06.457 "block_size": 4096, 00:39:06.457 "num_blocks": 38912, 00:39:06.457 "uuid": "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80", 00:39:06.457 "numa_id": 1, 00:39:06.457 "assigned_rate_limits": { 00:39:06.457 "rw_ios_per_sec": 0, 00:39:06.457 "rw_mbytes_per_sec": 0, 00:39:06.457 "r_mbytes_per_sec": 0, 00:39:06.457 "w_mbytes_per_sec": 0 00:39:06.457 }, 00:39:06.457 "claimed": false, 00:39:06.457 "zoned": false, 00:39:06.457 "supported_io_types": { 00:39:06.457 "read": true, 00:39:06.457 "write": true, 00:39:06.457 "unmap": true, 00:39:06.457 "flush": true, 00:39:06.457 "reset": true, 00:39:06.457 "nvme_admin": true, 00:39:06.457 "nvme_io": true, 00:39:06.457 "nvme_io_md": false, 00:39:06.457 "write_zeroes": true, 00:39:06.457 "zcopy": false, 00:39:06.457 "get_zone_info": false, 00:39:06.457 "zone_management": false, 00:39:06.457 "zone_append": false, 00:39:06.457 "compare": true, 00:39:06.457 "compare_and_write": true, 00:39:06.457 "abort": true, 00:39:06.457 "seek_hole": false, 00:39:06.457 "seek_data": false, 00:39:06.457 "copy": true, 00:39:06.457 "nvme_iov_md": false 00:39:06.457 }, 00:39:06.457 "memory_domains": [ 00:39:06.457 { 00:39:06.457 "dma_device_id": "system", 00:39:06.457 "dma_device_type": 1 00:39:06.457 } 00:39:06.457 ], 00:39:06.457 "driver_specific": { 00:39:06.457 "nvme": [ 00:39:06.457 { 00:39:06.457 "trid": { 00:39:06.457 "trtype": "TCP", 00:39:06.457 "adrfam": "IPv4", 00:39:06.457 "traddr": "10.0.0.2", 00:39:06.457 "trsvcid": "4420", 00:39:06.457 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:06.457 }, 00:39:06.457 "ctrlr_data": { 
00:39:06.457 "cntlid": 1, 00:39:06.457 "vendor_id": "0x8086", 00:39:06.457 "model_number": "SPDK bdev Controller", 00:39:06.457 "serial_number": "SPDK0", 00:39:06.457 "firmware_revision": "25.01", 00:39:06.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.457 "oacs": { 00:39:06.457 "security": 0, 00:39:06.457 "format": 0, 00:39:06.457 "firmware": 0, 00:39:06.457 "ns_manage": 0 00:39:06.457 }, 00:39:06.457 "multi_ctrlr": true, 00:39:06.457 "ana_reporting": false 00:39:06.457 }, 00:39:06.457 "vs": { 00:39:06.457 "nvme_version": "1.3" 00:39:06.457 }, 00:39:06.457 "ns_data": { 00:39:06.457 "id": 1, 00:39:06.457 "can_share": true 00:39:06.457 } 00:39:06.457 } 00:39:06.457 ], 00:39:06.457 "mp_policy": "active_passive" 00:39:06.457 } 00:39:06.457 } 00:39:06.457 ] 00:39:06.457 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:06.457 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578192 00:39:06.457 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:06.457 Running I/O for 10 seconds... 00:39:07.394 Latency(us) 00:39:07.394 [2024-12-13T11:44:35.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:07.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.394 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:39:07.394 [2024-12-13T11:44:35.094Z] =================================================================================================================== 00:39:07.394 [2024-12-13T11:44:35.094Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:39:07.394 00:39:08.330 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:08.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.588 Nvme0n1 : 2.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:39:08.588 [2024-12-13T11:44:36.288Z] =================================================================================================================== 00:39:08.588 [2024-12-13T11:44:36.288Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:39:08.588 00:39:08.588 true 00:39:08.588 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:08.588 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:08.847 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:08.847 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:08.847 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578192 00:39:09.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:09.415 Nvme0n1 : 3.00 
23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:39:09.415 [2024-12-13T11:44:37.115Z] =================================================================================================================== 00:39:09.415 [2024-12-13T11:44:37.115Z] Total : 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:39:09.415 00:39:10.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:10.352 Nvme0n1 : 4.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:10.352 [2024-12-13T11:44:38.052Z] =================================================================================================================== 00:39:10.352 [2024-12-13T11:44:38.052Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:10.352 00:39:11.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:11.730 Nvme0n1 : 5.00 23444.20 91.58 0.00 0.00 0.00 0.00 0.00 00:39:11.730 [2024-12-13T11:44:39.430Z] =================================================================================================================== 00:39:11.730 [2024-12-13T11:44:39.430Z] Total : 23444.20 91.58 0.00 0.00 0.00 0.00 0.00 00:39:11.730 00:39:12.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:12.666 Nvme0n1 : 6.00 23484.50 91.74 0.00 0.00 0.00 0.00 0.00 00:39:12.666 [2024-12-13T11:44:40.366Z] =================================================================================================================== 00:39:12.666 [2024-12-13T11:44:40.366Z] Total : 23484.50 91.74 0.00 0.00 0.00 0.00 0.00 00:39:12.666 00:39:13.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:13.603 Nvme0n1 : 7.00 23518.00 91.87 0.00 0.00 0.00 0.00 0.00 00:39:13.603 [2024-12-13T11:44:41.303Z] =================================================================================================================== 00:39:13.603 [2024-12-13T11:44:41.303Z] Total : 23518.00 91.87 0.00 0.00 0.00 0.00 0.00 00:39:13.603 00:39:14.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:14.539 Nvme0n1 : 8.00 23546.88 91.98 0.00 0.00 0.00 0.00 0.00 00:39:14.539 [2024-12-13T11:44:42.239Z] =================================================================================================================== 00:39:14.539 [2024-12-13T11:44:42.239Z] Total : 23546.88 91.98 0.00 0.00 0.00 0.00 0.00 00:39:14.539 00:39:15.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:15.476 Nvme0n1 : 9.00 23569.33 92.07 0.00 0.00 0.00 0.00 0.00 00:39:15.476 [2024-12-13T11:44:43.176Z] =================================================================================================================== 00:39:15.476 [2024-12-13T11:44:43.176Z] Total : 23569.33 92.07 0.00 0.00 0.00 0.00 0.00 00:39:15.476 00:39:16.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.412 Nvme0n1 : 10.00 23593.70 92.16 0.00 0.00 0.00 0.00 0.00 00:39:16.413 [2024-12-13T11:44:44.113Z] =================================================================================================================== 00:39:16.413 [2024-12-13T11:44:44.113Z] Total : 23593.70 92.16 0.00 0.00 0.00 0.00 0.00 00:39:16.413 00:39:16.413 00:39:16.413 Latency(us) 00:39:16.413 [2024-12-13T11:44:44.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.413 Nvme0n1 : 10.00 23591.04 92.15 0.00 0.00 5422.47 3276.80 26339.23 00:39:16.413 
[2024-12-13T11:44:44.113Z] =================================================================================================================== 00:39:16.413 [2024-12-13T11:44:44.113Z] Total : 23591.04 92.15 0.00 0.00 5422.47 3276.80 26339.23 00:39:16.413 { 00:39:16.413 "results": [ 00:39:16.413 { 00:39:16.413 "job": "Nvme0n1", 00:39:16.413 "core_mask": "0x2", 00:39:16.413 "workload": "randwrite", 00:39:16.413 "status": "finished", 00:39:16.413 "queue_depth": 128, 00:39:16.413 "io_size": 4096, 00:39:16.413 "runtime": 10.003842, 00:39:16.413 "iops": 23591.03632384438, 00:39:16.413 "mibps": 92.1524856400171, 00:39:16.413 "io_failed": 0, 00:39:16.413 "io_timeout": 0, 00:39:16.413 "avg_latency_us": 5422.467615209864, 00:39:16.413 "min_latency_us": 3276.8, 00:39:16.413 "max_latency_us": 26339.230476190478 00:39:16.413 } 00:39:16.413 ], 00:39:16.413 "core_count": 1 00:39:16.413 } 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 577971 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 577971 ']' 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 577971 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:16.413 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577971 00:39:16.671 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:16.671 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:16.671 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577971' 00:39:16.671 killing process with pid 577971 00:39:16.671 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 577971 00:39:16.671 Received shutdown signal, test time was about 10.000000 seconds 00:39:16.671 00:39:16.671 Latency(us) 00:39:16.672 [2024-12-13T11:44:44.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.672 [2024-12-13T11:44:44.372Z] =================================================================================================================== 00:39:16.672 [2024-12-13T11:44:44.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:16.672 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 577971 00:39:16.672 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:16.930 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575190 00:39:17.189 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575190 00:39:17.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575190 Killed "${NVMF_APP[@]}" "$@" 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=579847 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 579847 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 579847 ']' 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:17.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
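This is where lvs_grow_dirty earns its name: instead of unloading the lvstore, the harness SIGKILLs the original target (pid 575190) while its metadata is still live, then nvmfappstart brings up a replacement target (pid 579847 in this run). A sketch of the step, assuming $nvmfpid holds the old target's pid:

  kill -9 "$nvmfpid"        # deliberately skip any clean lvstore unload
  wait "$nvmfpid" || true   # reap it; the shell prints the Killed notice seen above
  # relaunch the target exactly as before and wait for its RPC socket again
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!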
00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:17.455 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:17.455 [2024-12-13 12:44:44.958255] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:17.455 [2024-12-13 12:44:44.959139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:17.455 [2024-12-13 12:44:44.959175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:17.455 [2024-12-13 12:44:45.037249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.455 [2024-12-13 12:44:45.058790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.455 [2024-12-13 12:44:45.058827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.455 [2024-12-13 12:44:45.058834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.455 [2024-12-13 12:44:45.058841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.455 [2024-12-13 12:44:45.058847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:17.455 [2024-12-13 12:44:45.059353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.455 [2024-12-13 12:44:45.121752] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:17.455 [2024-12-13 12:44:45.121953] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:17.455 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.455 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:17.455 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:17.455 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:17.455 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:17.715 [2024-12-13 12:44:45.356632] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:17.715 [2024-12-13 12:44:45.356840] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:17.715 [2024-12-13 12:44:45.356924] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:17.715 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:17.973 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 -t 2000 00:39:18.232 [ 00:39:18.232 { 00:39:18.232 "name": "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80", 00:39:18.232 "aliases": [ 00:39:18.232 "lvs/lvol" 00:39:18.232 ], 00:39:18.232 "product_name": "Logical Volume", 00:39:18.232 "block_size": 4096, 00:39:18.232 "num_blocks": 38912, 00:39:18.232 "uuid": "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80", 00:39:18.232 "assigned_rate_limits": { 00:39:18.232 "rw_ios_per_sec": 0, 00:39:18.232 "rw_mbytes_per_sec": 0, 00:39:18.232 
"r_mbytes_per_sec": 0, 00:39:18.232 "w_mbytes_per_sec": 0 00:39:18.232 }, 00:39:18.232 "claimed": false, 00:39:18.232 "zoned": false, 00:39:18.232 "supported_io_types": { 00:39:18.232 "read": true, 00:39:18.232 "write": true, 00:39:18.232 "unmap": true, 00:39:18.232 "flush": false, 00:39:18.232 "reset": true, 00:39:18.232 "nvme_admin": false, 00:39:18.232 "nvme_io": false, 00:39:18.232 "nvme_io_md": false, 00:39:18.232 "write_zeroes": true, 00:39:18.232 "zcopy": false, 00:39:18.232 "get_zone_info": false, 00:39:18.232 "zone_management": false, 00:39:18.232 "zone_append": false, 00:39:18.232 "compare": false, 00:39:18.232 "compare_and_write": false, 00:39:18.232 "abort": false, 00:39:18.232 "seek_hole": true, 00:39:18.232 "seek_data": true, 00:39:18.232 "copy": false, 00:39:18.232 "nvme_iov_md": false 00:39:18.232 }, 00:39:18.232 "driver_specific": { 00:39:18.232 "lvol": { 00:39:18.232 "lvol_store_uuid": "3ae6e684-3eaf-43bb-b625-07f9eef1e6fb", 00:39:18.232 "base_bdev": "aio_bdev", 00:39:18.232 "thin_provision": false, 00:39:18.232 "num_allocated_clusters": 38, 00:39:18.232 "snapshot": false, 00:39:18.232 "clone": false, 00:39:18.232 "esnap_clone": false 00:39:18.232 } 00:39:18.232 } 00:39:18.232 } 00:39:18.232 ] 00:39:18.232 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:18.232 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:18.232 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:18.491 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:18.491 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:18.491 12:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:18.491 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:18.491 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:18.750 [2024-12-13 12:44:46.307815] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:18.750 12:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:18.750 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:19.009 request: 00:39:19.009 { 00:39:19.009 "uuid": "3ae6e684-3eaf-43bb-b625-07f9eef1e6fb", 00:39:19.009 "method": "bdev_lvol_get_lvstores", 00:39:19.009 "req_id": 1 00:39:19.009 } 00:39:19.009 Got JSON-RPC error response 00:39:19.009 response: 00:39:19.009 { 00:39:19.009 "code": -19, 00:39:19.009 "message": "No such device" 00:39:19.009 } 00:39:19.009 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:19.009 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:19.009 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:19.009 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:19.009 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:19.268 aio_bdev 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:19.268 12:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:19.268 12:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 -t 2000 00:39:19.527 [ 00:39:19.527 { 00:39:19.527 "name": "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80", 00:39:19.527 "aliases": [ 00:39:19.527 "lvs/lvol" 00:39:19.527 ], 00:39:19.527 "product_name": "Logical Volume", 00:39:19.527 "block_size": 4096, 00:39:19.527 "num_blocks": 38912, 00:39:19.527 "uuid": "d05db2a6-0bba-4f24-a7b9-9b94c64f1e80", 00:39:19.527 "assigned_rate_limits": { 00:39:19.527 "rw_ios_per_sec": 0, 00:39:19.527 "rw_mbytes_per_sec": 0, 00:39:19.527 "r_mbytes_per_sec": 0, 00:39:19.527 "w_mbytes_per_sec": 0 00:39:19.527 }, 00:39:19.527 "claimed": false, 00:39:19.527 "zoned": false, 00:39:19.527 "supported_io_types": { 00:39:19.527 "read": true, 00:39:19.527 "write": true, 00:39:19.527 "unmap": true, 00:39:19.527 "flush": false, 00:39:19.527 "reset": true, 00:39:19.527 "nvme_admin": false, 00:39:19.527 "nvme_io": false, 00:39:19.527 "nvme_io_md": false, 00:39:19.527 "write_zeroes": true, 00:39:19.527 "zcopy": false, 00:39:19.527 "get_zone_info": false, 00:39:19.527 "zone_management": false, 00:39:19.527 "zone_append": false, 00:39:19.527 "compare": false, 00:39:19.527 "compare_and_write": false, 00:39:19.527 "abort": false, 00:39:19.527 "seek_hole": true, 00:39:19.527 "seek_data": true, 00:39:19.527 "copy": false, 00:39:19.527 "nvme_iov_md": false 00:39:19.527 }, 00:39:19.527 "driver_specific": { 00:39:19.527 "lvol": { 00:39:19.527 "lvol_store_uuid": "3ae6e684-3eaf-43bb-b625-07f9eef1e6fb", 00:39:19.527 "base_bdev": "aio_bdev", 00:39:19.527 "thin_provision": false, 00:39:19.527 "num_allocated_clusters": 38, 00:39:19.527 "snapshot": false, 00:39:19.527 "clone": false, 00:39:19.527 "esnap_clone": false 00:39:19.527 } 00:39:19.527 } 00:39:19.527 } 00:39:19.527 ] 00:39:19.527 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:19.527 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:19.527 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:19.786 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:19.786 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:19.786 12:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:20.045 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:20.045 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d05db2a6-0bba-4f24-a7b9-9b94c64f1e80 00:39:20.045 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ae6e684-3eaf-43bb-b625-07f9eef1e6fb 00:39:20.304 12:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:20.563 00:39:20.563 real 0m16.882s 00:39:20.563 user 0m34.230s 00:39:20.563 sys 0m3.921s 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:20.563 ************************************ 00:39:20.563 END TEST lvs_grow_dirty 00:39:20.563 ************************************ 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:20.563 nvmf_trace.0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
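process_shm above just archives the trace shared-memory file the target left in /dev/shm so it can be inspected offline with spdk_trace. The equivalent stand-alone commands, with the find/tar invocations taken from the log ($output_dir is an assumption standing in for the harness's output path):

    # Archive the SPDK trace ring written because the app ran with -e 0xFFFF.
    shm=$(find /dev/shm -name '*.0' -printf '%f\n')
    tar -C /dev/shm -czf "$output_dir/${shm}_shm.tar.gz" "$shm"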
00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:20.563 rmmod nvme_tcp 00:39:20.563 rmmod nvme_fabrics 00:39:20.563 rmmod nvme_keyring 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 579847 ']' 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 579847 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 579847 ']' 00:39:20.563 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 579847 00:39:20.822 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:20.822 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579847 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579847' 00:39:20.823 killing process with pid 579847 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 579847 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 579847 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.823 12:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:23.359 00:39:23.359 real 0m41.450s 00:39:23.359 user 0m51.675s 00:39:23.359 sys 0m10.215s 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:23.359 ************************************ 00:39:23.359 END TEST nvmf_lvs_grow 00:39:23.359 ************************************ 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:23.359 ************************************ 00:39:23.359 START TEST nvmf_bdev_io_wait 00:39:23.359 ************************************ 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:23.359 * Looking for test storage... 
00:39:23.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:23.359 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.360 --rc genhtml_branch_coverage=1 00:39:23.360 --rc genhtml_function_coverage=1 00:39:23.360 --rc genhtml_legend=1 00:39:23.360 --rc geninfo_all_blocks=1 00:39:23.360 --rc geninfo_unexecuted_blocks=1 00:39:23.360 00:39:23.360 ' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.360 --rc genhtml_branch_coverage=1 00:39:23.360 --rc genhtml_function_coverage=1 00:39:23.360 --rc genhtml_legend=1 00:39:23.360 --rc geninfo_all_blocks=1 00:39:23.360 --rc geninfo_unexecuted_blocks=1 00:39:23.360 00:39:23.360 ' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.360 --rc genhtml_branch_coverage=1 00:39:23.360 --rc genhtml_function_coverage=1 00:39:23.360 --rc genhtml_legend=1 00:39:23.360 --rc geninfo_all_blocks=1 00:39:23.360 --rc geninfo_unexecuted_blocks=1 00:39:23.360 00:39:23.360 ' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:23.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.360 --rc genhtml_branch_coverage=1 00:39:23.360 --rc genhtml_function_coverage=1 00:39:23.360 --rc genhtml_legend=1 00:39:23.360 --rc geninfo_all_blocks=1 00:39:23.360 --rc 
geninfo_unexecuted_blocks=1 00:39:23.360 00:39:23.360 ' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:23.360 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
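The arrays above are how common.sh classifies NICs: known Intel E810/X722 and Mellanox device ids are collected, and since this rig has E810 parts (0x159b turns up on both ports below), pci_devs is narrowed to the e810 list. A rough equivalent of that id lookup, assuming lspci is available; the real helper walks a pre-built pci_bus_cache instead:

    # Collect PCI addresses of Intel E810 NICs (ids 0x1592/0x159b, as in the log).
    mapfile -t e810 < <(lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}')
    echo "found ${#e810[@]} E810 port(s): ${e810[*]}"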
00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:29.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:29.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.998 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:29.999 Found net devices under 0000:af:00.0: cvl_0_0 00:39:29.999 
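The "Found net devices under ..." lines come from globbing each device's sysfs net directory; that is the whole trick for mapping a PCI address to an interface name. Stand-alone, with the address taken from the log:

    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done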
12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:29.999 Found net devices under 0000:af:00.1: cvl_0_1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:29.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:39:29.999 00:39:29.999 --- 10.0.0.2 ping statistics --- 00:39:29.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.999 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:29.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:39:29.999 00:39:29.999 --- 10.0.0.1 ping statistics --- 00:39:29.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.999 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=583948 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 583948 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 583948 ']' 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
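The namespace plumbing above gives the test a two-host topology on one box: the target NIC (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, with an iptables ACCEPT for port 4420 and a ping in each direction. Condensed from the commands in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator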
00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.999 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.999 [2024-12-13 12:44:56.867691] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:29.999 [2024-12-13 12:44:56.868581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:29.999 [2024-12-13 12:44:56.868613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.999 [2024-12-13 12:44:56.942265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:29.999 [2024-12-13 12:44:56.966205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.999 [2024-12-13 12:44:56.966243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.999 [2024-12-13 12:44:56.966249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.999 [2024-12-13 12:44:56.966256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.999 [2024-12-13 12:44:56.966261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:29.999 [2024-12-13 12:44:56.967542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.999 [2024-12-13 12:44:56.967649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:29.999 [2024-12-13 12:44:56.967735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.999 [2024-12-13 12:44:56.967736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:29.999 [2024-12-13 12:44:56.968121] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
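The app_setup_trace notices printed at startup are actionable while the target is alive; per the log's own hints, either of the following works (a sketch):

    spdk_trace -s nvmf -i 0          # snapshot the 0xFFFF tracepoint-group events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/   # or stash the shm trace buffer for offline analysis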
00:39:29.999 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:29.999 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:29.999 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 [2024-12-13 12:44:57.110016] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:30.000 [2024-12-13 12:44:57.110619] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:30.000 [2024-12-13 12:44:57.110970] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:30.000 [2024-12-13 12:44:57.111084] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
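bdev_set_options is only honored before framework initialization, which is why the target was launched with --wait-for-rpc; rpc_cmd is a thin wrapper over scripts/rpc.py, so the gating sequence above is roughly the following sketch, where -p/-c shrink the bdev_io pool and cache so that 128-deep bdevperf queues starve the pool and exercise the bdev_io_wait retry path:

    ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &   # boots but defers framework init
    ./scripts/rpc.py bdev_set_options -p 5 -c 1    # tiny bdev_io pool: 5 entries, cache of 1
    ./scripts/rpc.py framework_start_init          # now complete target bring-up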
00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 [2024-12-13 12:44:57.116505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 Malloc0 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.000 [2024-12-13 12:44:57.184795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=583980 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=583982 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:30.000 { 00:39:30.000 "params": { 00:39:30.000 "name": "Nvme$subsystem", 00:39:30.000 "trtype": "$TEST_TRANSPORT", 00:39:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:30.000 "adrfam": "ipv4", 00:39:30.000 "trsvcid": "$NVMF_PORT", 00:39:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:30.000 "hdgst": ${hdgst:-false}, 00:39:30.000 "ddgst": ${ddgst:-false} 00:39:30.000 }, 00:39:30.000 "method": "bdev_nvme_attach_controller" 00:39:30.000 } 00:39:30.000 EOF 00:39:30.000 )") 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=583984 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:30.000 { 00:39:30.000 "params": { 00:39:30.000 "name": "Nvme$subsystem", 00:39:30.000 "trtype": "$TEST_TRANSPORT", 00:39:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:30.000 "adrfam": "ipv4", 00:39:30.000 "trsvcid": "$NVMF_PORT", 00:39:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:30.000 "hdgst": ${hdgst:-false}, 00:39:30.000 "ddgst": ${ddgst:-false} 00:39:30.000 }, 00:39:30.000 "method": "bdev_nvme_attach_controller" 00:39:30.000 } 00:39:30.000 EOF 00:39:30.000 )") 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=583987 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:30.000 { 00:39:30.000 "params": { 00:39:30.000 "name": "Nvme$subsystem", 00:39:30.000 "trtype": "$TEST_TRANSPORT", 00:39:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:30.000 "adrfam": "ipv4", 00:39:30.000 "trsvcid": "$NVMF_PORT", 00:39:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:30.000 "hdgst": ${hdgst:-false}, 00:39:30.000 "ddgst": ${ddgst:-false} 00:39:30.000 }, 00:39:30.000 "method": "bdev_nvme_attach_controller" 00:39:30.000 } 00:39:30.000 EOF 00:39:30.000 )") 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:30.000 { 00:39:30.000 "params": { 00:39:30.000 "name": "Nvme$subsystem", 00:39:30.000 "trtype": "$TEST_TRANSPORT", 00:39:30.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:30.000 "adrfam": "ipv4", 00:39:30.000 "trsvcid": "$NVMF_PORT", 00:39:30.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:30.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:30.000 "hdgst": ${hdgst:-false}, 00:39:30.000 "ddgst": ${ddgst:-false} 00:39:30.000 }, 00:39:30.000 "method": "bdev_nvme_attach_controller" 00:39:30.000 } 00:39:30.000 EOF 00:39:30.000 )") 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 583980 00:39:30.000 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:30.001 "params": { 00:39:30.001 "name": "Nvme1", 00:39:30.001 "trtype": "tcp", 00:39:30.001 "traddr": "10.0.0.2", 00:39:30.001 "adrfam": "ipv4", 00:39:30.001 "trsvcid": "4420", 00:39:30.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:30.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:30.001 "hdgst": false, 00:39:30.001 "ddgst": false 00:39:30.001 }, 00:39:30.001 "method": "bdev_nvme_attach_controller" 00:39:30.001 }' 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:30.001 "params": { 00:39:30.001 "name": "Nvme1", 00:39:30.001 "trtype": "tcp", 00:39:30.001 "traddr": "10.0.0.2", 00:39:30.001 "adrfam": "ipv4", 00:39:30.001 "trsvcid": "4420", 00:39:30.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:30.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:30.001 "hdgst": false, 00:39:30.001 "ddgst": false 00:39:30.001 }, 00:39:30.001 "method": "bdev_nvme_attach_controller" 00:39:30.001 }' 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:30.001 "params": { 00:39:30.001 "name": "Nvme1", 00:39:30.001 "trtype": "tcp", 00:39:30.001 "traddr": "10.0.0.2", 00:39:30.001 "adrfam": "ipv4", 00:39:30.001 "trsvcid": "4420", 00:39:30.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:30.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:30.001 "hdgst": false, 00:39:30.001 "ddgst": false 00:39:30.001 }, 00:39:30.001 "method": "bdev_nvme_attach_controller" 00:39:30.001 }' 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:30.001 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:30.001 "params": { 00:39:30.001 "name": "Nvme1", 00:39:30.001 "trtype": "tcp", 00:39:30.001 "traddr": "10.0.0.2", 00:39:30.001 "adrfam": "ipv4", 00:39:30.001 "trsvcid": "4420", 00:39:30.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:30.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:30.001 "hdgst": false, 00:39:30.001 "ddgst": false 00:39:30.001 }, 00:39:30.001 "method": "bdev_nvme_attach_controller" 00:39:30.001 }' 00:39:30.001 [2024-12-13 12:44:57.238009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:30.001 [2024-12-13 12:44:57.238010] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:30.001 [2024-12-13 12:44:57.238012] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:30.001 [2024-12-13 12:44:57.238063] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:30.001 [2024-12-13 12:44:57.238064] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:30.001 [2024-12-13 12:44:57.238064] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:30.001 [2024-12-13 12:44:57.241267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:30.001 [2024-12-13 12:44:57.241315] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:30.001 [2024-12-13 12:44:57.410750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.001 [2024-12-13 12:44:57.427985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:30.001 [2024-12-13 12:44:57.513897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.001 [2024-12-13 12:44:57.530979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:30.001 [2024-12-13 12:44:57.608968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.001 [2024-12-13 12:44:57.632666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:30.001 [2024-12-13 12:44:57.669307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.001 [2024-12-13 12:44:57.685352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:30.301 Running I/O for 1 seconds... 00:39:30.301 Running I/O for 1 seconds... 00:39:30.301 Running I/O for 1 seconds... 00:39:30.301 Running I/O for 1 seconds...
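The --json /dev/fd/63 arguments above are bash process substitution: each bdevperf instance reads the attach-controller JSON that gen_nvmf_target_json prints. Laid out plainly, the four concurrent runs are (a sketch of what the script launches):

    BPERF=./build/examples/bdevperf      # one core mask and one shm instance id each
    $BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

Distinct -m masks keep the four reactors off each other's cores, and distinct -i ids give each instance its own hugepage file prefix (spdk1 through spdk4), which is why four separate EAL initializations appear interleaved in the output above.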
00:39:31.237 7852.00 IOPS, 30.67 MiB/s 00:39:31.237 Latency(us) 00:39:31.237 [2024-12-13T11:44:58.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.238 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:31.238 Nvme1n1 : 1.02 7872.21 30.75 0.00 0.00 16134.38 3167.57 22968.81 00:39:31.238 [2024-12-13T11:44:58.938Z] =================================================================================================================== 00:39:31.238 [2024-12-13T11:44:58.938Z] Total : 7872.21 30.75 0.00 0.00 16134.38 3167.57 22968.81 00:39:31.238 12100.00 IOPS, 47.27 MiB/s [2024-12-13T11:44:58.938Z] 7299.00 IOPS, 28.51 MiB/s 00:39:31.238 Latency(us) 00:39:31.238 [2024-12-13T11:44:58.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.238 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:31.238 Nvme1n1 : 1.01 12158.27 47.49 0.00 0.00 10492.20 1497.97 15291.73 00:39:31.238 [2024-12-13T11:44:58.938Z] =================================================================================================================== 00:39:31.238 [2024-12-13T11:44:58.938Z] Total : 12158.27 47.49 0.00 0.00 10492.20 1497.97 15291.73 00:39:31.238 00:39:31.238 Latency(us) 00:39:31.238 [2024-12-13T11:44:58.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.238 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:31.238 Nvme1n1 : 1.05 7104.08 27.75 0.00 0.00 17297.50 4899.60 45188.63 00:39:31.238 [2024-12-13T11:44:58.938Z] =================================================================================================================== 00:39:31.238 [2024-12-13T11:44:58.938Z] Total : 7104.08 27.75 0.00 0.00 17297.50 4899.60 45188.63 00:39:31.238 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 583982 00:39:31.497 242656.00 IOPS, 947.88 MiB/s 00:39:31.497 Latency(us) 00:39:31.497 [2024-12-13T11:44:59.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.497 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:31.497 Nvme1n1 : 1.00 242282.65 946.42 0.00 0.00 525.31 223.33 1536.98 00:39:31.497 [2024-12-13T11:44:59.197Z] =================================================================================================================== 00:39:31.497 [2024-12-13T11:44:59.197Z] Total : 242282.65 946.42 0.00 0.00 525.31 223.33 1536.98 00:39:31.497 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 583984 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 583987 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:31.497 12:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.497 rmmod nvme_tcp 00:39:31.497 rmmod nvme_fabrics 00:39:31.497 rmmod nvme_keyring 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 583948 ']' 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 583948 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 583948 ']' 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 583948 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.497 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583948 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583948' 00:39:31.756 killing process with pid 583948 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 583948 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 583948 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:31.756 12:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.756 12:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.295 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:34.295 00:39:34.295 real 0m10.760s 00:39:34.295 user 0m14.880s 00:39:34.295 sys 0m6.287s 00:39:34.295 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.295 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:34.295 ************************************ 00:39:34.295 END TEST nvmf_bdev_io_wait 00:39:34.295 ************************************ 00:39:34.295 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:34.296 ************************************ 00:39:34.296 START TEST nvmf_queue_depth 00:39:34.296 ************************************ 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:34.296 * Looking for test storage... 
00:39:34.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:34.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.296 --rc genhtml_branch_coverage=1 00:39:34.296 --rc genhtml_function_coverage=1 00:39:34.296 --rc genhtml_legend=1 00:39:34.296 --rc geninfo_all_blocks=1 00:39:34.296 --rc geninfo_unexecuted_blocks=1 00:39:34.296 00:39:34.296 ' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:34.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.296 --rc genhtml_branch_coverage=1 00:39:34.296 --rc genhtml_function_coverage=1 00:39:34.296 --rc genhtml_legend=1 00:39:34.296 --rc geninfo_all_blocks=1 00:39:34.296 --rc geninfo_unexecuted_blocks=1 00:39:34.296 00:39:34.296 ' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:34.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.296 --rc genhtml_branch_coverage=1 00:39:34.296 --rc genhtml_function_coverage=1 00:39:34.296 --rc genhtml_legend=1 00:39:34.296 --rc geninfo_all_blocks=1 00:39:34.296 --rc geninfo_unexecuted_blocks=1 00:39:34.296 00:39:34.296 ' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:34.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.296 --rc genhtml_branch_coverage=1 00:39:34.296 --rc genhtml_function_coverage=1 00:39:34.296 --rc genhtml_legend=1 00:39:34.296 --rc geninfo_all_blocks=1 00:39:34.296 --rc 
geninfo_unexecuted_blocks=1 00:39:34.296 00:39:34.296 ' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:34.296 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:34.297 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.575 12:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:39.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:39.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:39.575 Found net devices under 0000:af:00.0: cvl_0_0 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:39.575 Found net devices under 0000:af:00.1: cvl_0_1 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:39.575 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.576 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:39.835 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:39.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:39:39.836 00:39:39.836 --- 10.0.0.2 ping statistics --- 00:39:39.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.836 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:39.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:39:39.836 00:39:39.836 --- 10.0.0.1 ping statistics --- 00:39:39.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.836 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:39.836 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=587824 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 587824 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 587824 ']' 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.095 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.095 [2024-12-13 12:45:07.624180] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:40.095 [2024-12-13 12:45:07.625153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:40.095 [2024-12-13 12:45:07.625190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.095 [2024-12-13 12:45:07.706876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.095 [2024-12-13 12:45:07.728427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.095 [2024-12-13 12:45:07.728465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.095 [2024-12-13 12:45:07.728472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.095 [2024-12-13 12:45:07.728478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.095 [2024-12-13 12:45:07.728483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.095 [2024-12-13 12:45:07.728980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.095 [2024-12-13 12:45:07.791330] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:40.095 [2024-12-13 12:45:07.791533] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 [2024-12-13 12:45:07.869638] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 Malloc0 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 [2024-12-13 12:45:07.949728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=587848 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 587848 /var/tmp/bdevperf.sock 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 587848 ']' 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.355 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.355 [2024-12-13 12:45:07.997500] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
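[editor's note] With the target up, queue_depth.sh provisions it over RPC (a TCP transport, a 64 MiB/512 B-block malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420) and then starts bdevperf against it, as traced above and continued in the EAL banner below. The same sequence as plain rpc.py calls, options copied verbatim from the trace (the log goes through the rpc_cmd wrapper):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: 1024-deep 4 KiB verify workload for 10 s over NVMe/TCP
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests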
00:39:40.355 [2024-12-13 12:45:07.997542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587848 ] 00:39:40.614 [2024-12-13 12:45:08.071095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.614 [2024-12-13 12:45:08.093739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:40.614 NVMe0n1 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.614 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:40.873 Running I/O for 10 seconds... 00:39:42.746 12285.00 IOPS, 47.99 MiB/s [2024-12-13T11:45:11.822Z] 12293.50 IOPS, 48.02 MiB/s [2024-12-13T11:45:12.759Z] 12563.00 IOPS, 49.07 MiB/s [2024-12-13T11:45:13.696Z] 12549.50 IOPS, 49.02 MiB/s [2024-12-13T11:45:14.633Z] 12623.20 IOPS, 49.31 MiB/s [2024-12-13T11:45:15.571Z] 12651.67 IOPS, 49.42 MiB/s [2024-12-13T11:45:16.507Z] 12685.14 IOPS, 49.55 MiB/s [2024-12-13T11:45:17.443Z] 12687.75 IOPS, 49.56 MiB/s [2024-12-13T11:45:18.821Z] 12721.89 IOPS, 49.69 MiB/s [2024-12-13T11:45:18.821Z] 12717.40 IOPS, 49.68 MiB/s 00:39:51.121 Latency(us) 00:39:51.121 [2024-12-13T11:45:18.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:51.121 Verification LBA range: start 0x0 length 0x4000 00:39:51.121 NVMe0n1 : 10.05 12755.61 49.83 0.00 0.00 80014.95 10423.34 50431.51 00:39:51.121 [2024-12-13T11:45:18.821Z] =================================================================================================================== 00:39:51.121 [2024-12-13T11:45:18.821Z] Total : 12755.61 49.83 0.00 0.00 80014.95 10423.34 50431.51 00:39:51.121 { 00:39:51.121 "results": [ 00:39:51.121 { 00:39:51.121 "job": "NVMe0n1", 00:39:51.122 "core_mask": "0x1", 00:39:51.122 "workload": "verify", 00:39:51.122 "status": "finished", 00:39:51.122 "verify_range": { 00:39:51.122 "start": 0, 00:39:51.122 "length": 16384 00:39:51.122 }, 00:39:51.122 "queue_depth": 1024, 00:39:51.122 "io_size": 4096, 00:39:51.122 "runtime": 10.047191, 00:39:51.122 "iops": 12755.605024329685, 00:39:51.122 "mibps": 49.82658212628783, 00:39:51.122 "io_failed": 0, 00:39:51.122 "io_timeout": 0, 00:39:51.122 "avg_latency_us": 80014.95478590045, 00:39:51.122 "min_latency_us": 10423.344761904762, 00:39:51.122 "max_latency_us": 50431.51238095238 00:39:51.122 } 
00:39:51.122 ], 00:39:51.122 "core_count": 1 00:39:51.122 } 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 587848 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 587848 ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 587848 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587848 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587848' 00:39:51.122 killing process with pid 587848 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 587848 00:39:51.122 Received shutdown signal, test time was about 10.000000 seconds 00:39:51.122 00:39:51.122 Latency(us) 00:39:51.122 [2024-12-13T11:45:18.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.122 [2024-12-13T11:45:18.822Z] =================================================================================================================== 00:39:51.122 [2024-12-13T11:45:18.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 587848 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.122 rmmod nvme_tcp 00:39:51.122 rmmod nvme_fabrics 00:39:51.122 rmmod nvme_keyring 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:51.122 
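[editor's note] The JSON object above is the machine-readable twin of the results table, and its two headline figures are internally consistent: MiB/s is just IOPS times the 4 KiB I/O size. A quick way to check, assuming the JSON has been saved to a file (results.json is a hypothetical name; jq is assumed to be installed):

    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s in \(.runtime) s"' results.json
    # cross-check: 12755.61 IOPS * 4096 B / 2^20 = 49.83 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 12755.61 * 4096 / 1048576 }'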
12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 587824 ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 587824 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 587824 ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 587824 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587824 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587824' 00:39:51.122 killing process with pid 587824 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 587824 00:39:51.122 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 587824 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.381 12:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:53.919 00:39:53.919 real 0m19.571s 00:39:53.919 user 0m22.668s 00:39:53.919 sys 0m6.117s 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:53.919 ************************************ 00:39:53.919 END TEST nvmf_queue_depth 00:39:53.919 ************************************ 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:53.919 ************************************ 00:39:53.919 START TEST nvmf_target_multipath 00:39:53.919 ************************************ 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:53.919 * Looking for test storage... 00:39:53.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:53.919 12:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.919 --rc genhtml_branch_coverage=1 00:39:53.919 --rc genhtml_function_coverage=1 00:39:53.919 --rc genhtml_legend=1 00:39:53.919 --rc geninfo_all_blocks=1 00:39:53.919 --rc geninfo_unexecuted_blocks=1 00:39:53.919 00:39:53.919 ' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.919 --rc genhtml_branch_coverage=1 00:39:53.919 --rc genhtml_function_coverage=1 00:39:53.919 --rc genhtml_legend=1 00:39:53.919 --rc geninfo_all_blocks=1 00:39:53.919 --rc geninfo_unexecuted_blocks=1 00:39:53.919 00:39:53.919 ' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.919 --rc genhtml_branch_coverage=1 00:39:53.919 --rc genhtml_function_coverage=1 00:39:53.919 --rc genhtml_legend=1 00:39:53.919 --rc geninfo_all_blocks=1 00:39:53.919 --rc 
geninfo_unexecuted_blocks=1 00:39:53.919 00:39:53.919 ' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.919 --rc genhtml_branch_coverage=1 00:39:53.919 --rc genhtml_function_coverage=1 00:39:53.919 --rc genhtml_legend=1 00:39:53.919 --rc geninfo_all_blocks=1 00:39:53.919 --rc geninfo_unexecuted_blocks=1 00:39:53.919 00:39:53.919 ' 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:53.919 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
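[editor's note] The multipath test opens with scripts/common.sh deciding which lcov coverage flags to export by comparing the installed lcov version against 1.15 field by field (the decimal/ver1[v]/ver2[v] trace above). A simplified recreation of that lt/cmp_versions logic, not the verbatim implementation:

    lt() {    # succeed when version $1 sorts strictly before version $2
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal is not "less than"
    }
    lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"   # as exercised above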
00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.920 12:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:53.920 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
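[editor's note] build_nvmf_app_args, traced above, shows the pattern the harness uses throughout: the target command line is grown as a bash array, flags are appended only when the matching test knob is set (here the interrupt-mode flag), and once the namespace exists every launch is prefixed with ip netns exec. A condensed sketch of that pattern; the interrupt_mode guard variable is illustrative, not the harness's actual name:

    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)           # shm id and tracepoint mask
    (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)  # only in interrupt-mode runs
    # after nvmf_tcp_init, launches are re-homed into the target namespace:
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x2 &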
00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:00.493 12:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:00.493 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:00.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:00.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:00.494 12:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:00.494 Found net devices under 0000:af:00.0: cvl_0_0 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:00.494 Found net devices under 0000:af:00.1: cvl_0_1 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:00.494 12:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:00.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:00.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:40:00.494 00:40:00.494 --- 10.0.0.2 ping statistics --- 00:40:00.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.494 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:00.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:00.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:40:00.494 00:40:00.494 --- 10.0.0.1 ping statistics --- 00:40:00.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.494 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:00.494 only one NIC for nvmf test 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:00.494 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.495 rmmod nvme_tcp 00:40:00.495 rmmod nvme_fabrics 00:40:00.495 rmmod nvme_keyring 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:00.495 12:45:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.495 12:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:01.874 12:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.874 00:40:01.874 real 0m8.253s 00:40:01.874 user 0m1.846s 00:40:01.874 sys 0m4.408s 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:01.874 ************************************ 00:40:01.874 END TEST nvmf_target_multipath 00:40:01.874 ************************************ 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:01.874 ************************************ 00:40:01.874 START TEST nvmf_zcopy 00:40:01.874 ************************************ 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:01.874 * Looking for test storage... 
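Annotation: the multipath test above bails out early ("only one NIC for nvmf test", exit 0), calls nvmftestfini itself at multipath.sh@47, and then the EXIT trap installed at init fires the same function again, which is why the whole modprobe/iptables/netns teardown appears twice between 12:45:27 and 12:45:29. A minimal sketch of that pattern, condensed from the commands traced above (function and interface names are the ones from this run; the bodies are abbreviated assumptions, not the full nvmf/common.sh implementation):

  #!/usr/bin/env bash
  # Teardown must be idempotent: it runs once explicitly and once via the trap.
  nvmftestfini() {
      sync
      set +e                      # module removal may fail while devices settle
      for i in {1..20}; do
          # break-on-success is an assumption; the trace only shows one pass
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      done
      set -e
      # drop only the firewall rules the harness tagged with SPDK_NVMF
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumption: roughly what _remove_spdk_ns does
      ip -4 addr flush cvl_0_1
  }
  trap nvmftestfini SIGINT SIGTERM EXIT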
00:40:01.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:01.874 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.134 --rc genhtml_branch_coverage=1 00:40:02.134 --rc genhtml_function_coverage=1 00:40:02.134 --rc genhtml_legend=1 00:40:02.134 --rc geninfo_all_blocks=1 00:40:02.134 --rc geninfo_unexecuted_blocks=1 00:40:02.134 00:40:02.134 ' 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.134 --rc genhtml_branch_coverage=1 00:40:02.134 --rc genhtml_function_coverage=1 00:40:02.134 --rc genhtml_legend=1 00:40:02.134 --rc geninfo_all_blocks=1 00:40:02.134 --rc geninfo_unexecuted_blocks=1 00:40:02.134 00:40:02.134 ' 00:40:02.134 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:02.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.134 --rc genhtml_branch_coverage=1 00:40:02.134 --rc genhtml_function_coverage=1 00:40:02.134 --rc genhtml_legend=1 00:40:02.135 --rc geninfo_all_blocks=1 00:40:02.135 --rc geninfo_unexecuted_blocks=1 00:40:02.135 00:40:02.135 ' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:02.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.135 --rc genhtml_branch_coverage=1 00:40:02.135 --rc genhtml_function_coverage=1 00:40:02.135 --rc genhtml_legend=1 00:40:02.135 --rc geninfo_all_blocks=1 00:40:02.135 --rc geninfo_unexecuted_blocks=1 00:40:02.135 00:40:02.135 ' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.135 12:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.135 12:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:08.707 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:08.707 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:08.707 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:08.707 Found net devices under 0000:af:00.0: cvl_0_0 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.707 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:08.708 Found net devices under 0000:af:00.1: cvl_0_1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:08.708 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:08.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:40:08.708 00:40:08.708 --- 10.0.0.2 ping statistics --- 00:40:08.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.708 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:08.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:40:08.708 00:40:08.708 --- 10.0.0.1 ping statistics --- 00:40:08.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.708 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=596718 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 596718 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 596718 ']' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:08.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.708 [2024-12-13 12:45:35.579122] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:08.708 [2024-12-13 12:45:35.580002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:08.708 [2024-12-13 12:45:35.580033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:08.708 [2024-12-13 12:45:35.656209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.708 [2024-12-13 12:45:35.676860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:08.708 [2024-12-13 12:45:35.676893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:08.708 [2024-12-13 12:45:35.676900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:08.708 [2024-12-13 12:45:35.676906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:08.708 [2024-12-13 12:45:35.676911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:08.708 [2024-12-13 12:45:35.677378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.708 [2024-12-13 12:45:35.739159] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:08.708 [2024-12-13 12:45:35.739351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
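Annotation: before the zcopy target comes up, nvmftestinit rebuilt the same two-namespace topology the multipath test used: the first E810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the app is launched through ip netns exec. Condensed from the trace above into plain commands (names, addresses and flags are exactly the ones from this run; only the readiness loop at the end is an assumption, the harness uses its own waitforlisten helper):

  # build the target namespace and wire one port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) in; the comment tag lets cleanup strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                     # sanity: initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # sanity: target -> initiator
  # start the target: shm id 0, tracepoint group mask 0xFFFF, interrupt
  # mode, core mask 0x2 (hence "Reactor started on core 1" above)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  # the RPC socket is filesystem-scoped, so it is reachable from the root namespace
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.1; done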
00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.708 [2024-12-13 12:45:35.810115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.708 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.709 [2024-12-13 12:45:35.838307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:08.709 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.709 malloc0 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:08.709 { 00:40:08.709 "params": { 00:40:08.709 "name": "Nvme$subsystem", 00:40:08.709 "trtype": "$TEST_TRANSPORT", 00:40:08.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.709 "adrfam": "ipv4", 00:40:08.709 "trsvcid": "$NVMF_PORT", 00:40:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.709 "hdgst": ${hdgst:-false}, 00:40:08.709 "ddgst": ${ddgst:-false} 00:40:08.709 }, 00:40:08.709 "method": "bdev_nvme_attach_controller" 00:40:08.709 } 00:40:08.709 EOF 00:40:08.709 )") 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:08.709 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:08.709 "params": { 00:40:08.709 "name": "Nvme1", 00:40:08.709 "trtype": "tcp", 00:40:08.709 "traddr": "10.0.0.2", 00:40:08.709 "adrfam": "ipv4", 00:40:08.709 "trsvcid": "4420", 00:40:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:08.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:08.709 "hdgst": false, 00:40:08.709 "ddgst": false 00:40:08.709 }, 00:40:08.709 "method": "bdev_nvme_attach_controller" 00:40:08.709 }' 00:40:08.709 [2024-12-13 12:45:35.931696] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
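Annotation: with the target listening on /var/tmp/spdk.sock, zcopy.sh provisions the subsystem over RPC. Replayed here with plain scripts/rpc.py calls (rpc_cmd in the trace is a thin wrapper around it); commands and arguments are exactly the ones traced above:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  # TCP transport; -c 0 sets in-capsule data size, -o and --zcopy as passed by the harness
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # allow any host (-a), serial number, at most 10 namespaces
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM-backed bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1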
00:40:08.709 [2024-12-13 12:45:35.931737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596877 ]
00:40:08.709 [2024-12-13 12:45:36.004906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:08.709 [2024-12-13 12:45:36.026985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 10 seconds...
00:40:11.024 8354.00 IOPS, 65.27 MiB/s
[2024-12-13T11:45:39.661Z] 8416.00 IOPS, 65.75 MiB/s
[2024-12-13T11:45:40.598Z] 8450.33 IOPS, 66.02 MiB/s
[2024-12-13T11:45:41.536Z] 8438.25 IOPS, 65.92 MiB/s
[2024-12-13T11:45:42.473Z] 8443.20 IOPS, 65.96 MiB/s
[2024-12-13T11:45:43.410Z] 8451.17 IOPS, 66.02 MiB/s
[2024-12-13T11:45:44.788Z] 8454.43 IOPS, 66.05 MiB/s
[2024-12-13T11:45:45.724Z] 8459.12 IOPS, 66.09 MiB/s
[2024-12-13T11:45:46.660Z] 8468.11 IOPS, 66.16 MiB/s
[2024-12-13T11:45:46.660Z] 8467.70 IOPS, 66.15 MiB/s
00:40:18.960 Latency(us)
00:40:18.960 [2024-12-13T11:45:46.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:18.960 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:40:18.960 Verification LBA range: start 0x0 length 0x1000
00:40:18.960 Nvme1n1 : 10.01 8470.61 66.18 0.00 0.00 15068.57 1318.52 21595.67
00:40:18.960 [2024-12-13T11:45:46.660Z] ===================================================================================================================
00:40:18.960 [2024-12-13T11:45:46.660Z] Total : 8470.61 66.18 0.00 0.00 15068.57 1318.52 21595.67
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=598506
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:40:18.960 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:40:18.961 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:40:18.961 {
00:40:18.961 "params": {
00:40:18.961 "name": "Nvme$subsystem",
00:40:18.961 "trtype": "$TEST_TRANSPORT",
00:40:18.961 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:18.961 "adrfam": "ipv4",
00:40:18.961 "trsvcid": "$NVMF_PORT",
00:40:18.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:18.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:18.961 "hdgst": ${hdgst:-false},
00:40:18.961 "ddgst": ${ddgst:-false}
00:40:18.961 },
00:40:18.961 "method": "bdev_nvme_attach_controller"
00:40:18.961 }
00:40:18.961 EOF
00:40:18.961 )")
00:40:18.961 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:40:18.961 
[2024-12-13 12:45:46.529706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.529736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:18.961 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:18.961 12:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:18.961 "params": { 00:40:18.961 "name": "Nvme1", 00:40:18.961 "trtype": "tcp", 00:40:18.961 "traddr": "10.0.0.2", 00:40:18.961 "adrfam": "ipv4", 00:40:18.961 "trsvcid": "4420", 00:40:18.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:18.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:18.961 "hdgst": false, 00:40:18.961 "ddgst": false 00:40:18.961 }, 00:40:18.961 "method": "bdev_nvme_attach_controller" 00:40:18.961 }' 00:40:18.961 [2024-12-13 12:45:46.541669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.541680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.553666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.553676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.565665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.565674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.567814] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
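Annotation: the second bdevperf run (5 s of randrw at queue depth 128, 8 KiB I/O, -M 50 for a 50/50 read/write mix) gets its bdev layer from gen_nvmf_target_json over /dev/fd/63 rather than a file, just as the first verify run used /dev/fd/62. The heredoc above emits one bdev_nvme_attach_controller fragment per subsystem, and jq/printf assemble the final config. Written out as a standalone equivalent (the inner params block is exactly what the trace prints; the outer "subsystems" wrapper is the usual SPDK JSON-config shape and an assumption here, since the trace only shows the fragment):

  ./build/examples/bdevperf -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )

Process substitution <(...) hands bdevperf a /dev/fd path, which is why the traced command line shows --json /dev/fd/63.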
00:40:18.961 [2024-12-13 12:45:46.567853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598506 ] 00:40:18.961 [2024-12-13 12:45:46.577666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.577676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.589665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.589674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.601666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.601676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.613666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.613676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.625680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.625689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.637665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.637673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.961 [2024-12-13 12:45:46.642345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.961 [2024-12-13 12:45:46.649669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.961 [2024-12-13 12:45:46.649680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.661669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.661683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.664662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.220 [2024-12-13 12:45:46.673668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.673679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.685678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.685697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.697671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.697685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.709672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.709685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.721670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:40:19.220 [2024-12-13 12:45:46.721682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.733675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.733691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.745682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.745701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.757670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.757683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.769670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.769683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.781669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.781680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.793665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.793675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.805665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.805674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.817667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.817680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.829670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.829683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 [2024-12-13 12:45:46.841673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.220 [2024-12-13 12:45:46.841691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.220 Running I/O for 5 seconds... 
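Annotation: the error pairs repeating every dozen milliseconds above (subsystem.c:2130 rejecting the NSID, nvmf_rpc.c:1520 reporting "Unable to add namespace") evidently come from the test script itself, not from a failing run: while bdevperf drives the 5-second randrw load, the script keeps trying to re-add the already-attached namespace, and the target is expected to refuse each attempt without disturbing I/O. Roughly (the loop shape and termination condition are assumptions; only the RPC call is taken from the trace):

  # hammer the subsystem with duplicate-NSID adds while I/O is running;
  # every call is expected to fail with "Requested NSID 1 already in use"
  while kill -0 "$perfpid" 2>/dev/null; do
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done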
00:40:20.259 16543.00 IOPS, 129.24 MiB/s
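The performance report above also pins down the workload geometry: 129.24 MiB/s at 16543 I/O per second works out to almost exactly 8 KiB per I/O. A quick check of that arithmetic (the 8 KiB block size is inferred from these two numbers, not stated anywhere in the log):

iops = 16543.00
mib_per_s = 129.24

bytes_per_io = mib_per_s * 1024 * 1024 / iops
print(round(bytes_per_io))         # ~8192, i.e. the run is doing 8 KiB I/Os
print(iops * 8192 / 1024 / 1024)   # 129.2421875 MiB/s, matching the report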
00:40:21.297 16600.50 IOPS, 129.69 MiB/s
00:40:22.335 16606.33 IOPS, 129.74 MiB/s
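The fractional parts of the three reports (16543.00, 16600.50, 16606.33) fit cumulative averages over one, two, and three reporting intervals, in which case the per-interval rates can be recovered as below; reading bdevperf's periodic output as cumulative is an inference from these numbers, not something the log states.

avgs = [16543.00, 16600.50, 16606.33]  # successive IOPS reports from the log

per_interval, prev_total = [], 0.0
for n, avg in enumerate(avgs, start=1):
    total = avg * n                    # cumulative I/O count after n intervals
    per_interval.append(total - prev_total)
    prev_total = total

print(per_interval)   # [16543.0, 16658.0, ~16618]: throughput stays steady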
00:40:23.114 [2024-12-13 12:45:50.751514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:23.114 [2024-12-13 12:45:50.751531]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.114 [2024-12-13 12:45:50.765829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.114 [2024-12-13 12:45:50.765847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.114 [2024-12-13 12:45:50.777727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.114 [2024-12-13 12:45:50.777745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.114 [2024-12-13 12:45:50.791275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.114 [2024-12-13 12:45:50.791293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.114 [2024-12-13 12:45:50.806171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.114 [2024-12-13 12:45:50.806188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.821341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.821360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.835379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.835397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.850133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.850154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 16601.00 IOPS, 129.70 MiB/s [2024-12-13T11:45:51.074Z] [2024-12-13 12:45:50.865713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.865730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.879218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.879235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.894078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.894095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.909417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.909434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.922658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.922676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.937735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.937753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.950285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.950302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 
12:45:50.965388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.965407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.979224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.979241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:50.994510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:50.994528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:51.009007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:51.009025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:51.021209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:51.021227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:51.035931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:51.035949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:51.051011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:51.051028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.374 [2024-12-13 12:45:51.065802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.374 [2024-12-13 12:45:51.065820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.079380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.079399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.093925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.093943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.105733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.105750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.119163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.119186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.133980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.133997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.149835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.149854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.163386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.163403] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.177884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.177902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.188936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.188954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.203407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.203424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.218401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.218435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.233308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.233325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.247511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.247530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.633 [2024-12-13 12:45:51.262178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.633 [2024-12-13 12:45:51.262195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.634 [2024-12-13 12:45:51.274494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.634 [2024-12-13 12:45:51.274512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.634 [2024-12-13 12:45:51.289285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.634 [2024-12-13 12:45:51.289303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.634 [2024-12-13 12:45:51.300098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.634 [2024-12-13 12:45:51.300116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.634 [2024-12-13 12:45:51.314965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.634 [2024-12-13 12:45:51.314983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.634 [2024-12-13 12:45:51.329993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.634 [2024-12-13 12:45:51.330010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.345127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.345144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.360426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.360444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.374620] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.374638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.389732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.389750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.401375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.401392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.415952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.415969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.431074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.431091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.445893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.445911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.457229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.457249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.471888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.471907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.486490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.486509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.501164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.501183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.515694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.515712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.530906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.530923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.545742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.545761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.558294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.558312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.573962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.573980] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.893 [2024-12-13 12:45:51.586834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.893 [2024-12-13 12:45:51.586851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.597976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.597995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.611603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.611622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.626557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.626575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.641680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.641699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.655530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.655548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.670753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.670771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.685389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.152 [2024-12-13 12:45:51.685407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.152 [2024-12-13 12:45:51.699251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.699270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.713885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.713904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.726335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.726353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.741372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.741389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.752469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.752487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.767376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.153 [2024-12-13 12:45:51.767394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.153 [2024-12-13 12:45:51.782388] 
00:40:24.153 [2024-12-13 12:45:51.782388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:24.153 [2024-12-13 12:45:51.782405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats through 12:45:51.855 ...]
00:40:24.412 16609.40 IOPS, 129.76 MiB/s [2024-12-13T11:45:52.112Z]
00:40:24.412 [2024-12-13 12:45:51.869480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:24.412 [2024-12-13 12:45:51.869499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:24.413 Latency(us)
[2024-12-13T11:45:52.112Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average       min        max
00:40:24.413 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:24.413 Nvme1n1            :       5.01   16611.72     129.78       0.00     0.00     7697.68   1997.29   12982.37
[2024-12-13T11:45:52.113Z] ===================================================================================================================
[2024-12-13T11:45:52.113Z] Total              :               16611.72     129.78       0.00     0.00     7697.68   1997.29   12982.37
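A quick consistency check on the summary above: the job uses 8 KiB (8192-byte) I/Os, and 16611.72 IOPS x 8192 bytes = 136,083,210 bytes/s, which divided by 1,048,576 gives exactly the reported 129.78 MiB/s. The latency column is consistent too: by Little's law, queue depth 128 divided by the 7697.68 us average service time is 128 / 0.00769768 s, or about 16.6K IOPS, matching the throughput column.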
00:40:24.413 [2024-12-13 12:45:51.877670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:24.413 [2024-12-13 12:45:51.877686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats every ~12 ms through 12:45:52.021 ...]
00:40:24.413 [2024-12-13 12:45:52.021667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:24.413 [2024-12-13 12:45:52.021677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (598506) - No such process
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 598506
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:24.413 delay0
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
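The three RPCs traced here (drop the namespace, wrap the bdev in a delay bdev, re-add it) can be replayed by hand against a live target. A minimal sketch using the tree's scripts/rpc.py, assuming the default RPC socket and this run's subsystem/bdev names:

#!/usr/bin/env bash
# Sketch of the zcopy.sh@52-@54 steps above, assuming a running nvmf target
# reachable over the default RPC socket, with subsystem cnode1 and bdev malloc0.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_subsystem_remove_ns "$NQN" 1          # drop NSID 1 (the malloc0 namespace)

# Wrap malloc0 in a delay bdev: average and p99 read/write latencies are all
# set to 1000000 us (1 s) so queued commands stay in flight long enough to abort.
"$RPC" bdev_delay_create -b malloc0 -d delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000

"$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1   # re-expose the slow bdev as NSID 1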
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:40:24.672 [2024-12-13 12:45:52.164726] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:40:32.795 Initializing NVMe Controllers
00:40:32.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:32.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:40:32.795 Initialization complete. Launching workers.
00:40:32.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 288, failed: 14665
00:40:32.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14870, failed to submit 83
00:40:32.795 success 14753, unsuccessful 117, failed 0
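The abort example floods the deliberately slow delay0 namespace with I/O and then cancels the queued commands; the "success 14753" line counts aborts the controller actually completed. Reading the flags of the invocation above, a hand re-run looks like this (workspace path and target address as in this log):

# Hand re-run of the abort stress above: -c = core mask (core 0 only),
# -q = commands kept in flight per namespace, -w randrw -M 50 = 50/50 random
# read/write mix, -t = seconds to run, -l = log level, -r = target transport ID.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'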
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 596718 ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 596718 ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596718'
killing process with pid 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 596718
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:40:34.174 real    0m32.192s
00:40:34.174 user    0m41.711s
00:40:34.174 sys     0m12.883s
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:34.174 ************************************
00:40:34.174 END TEST nvmf_zcopy
00:40:34.174 ************************************
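Condensed, the nvmftestfini sequence traced above amounts to the following; pid 596718 and the cvl_* names are this run's, and the ip netns delete step is an assumption about what _remove_spdk_ns boils down to:

#!/usr/bin/env bash
# Teardown sketch mirroring the trace above (names and pids from this run).
set +e
for i in {1..20}; do                 # the module can stay busy while queues drain
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

kill 596718                          # stop the nvmf target app (reactor_1); the
                                     # harness then waits for it as its own child

iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip test firewall rules
ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # clear the initiator-side interface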
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:40:34.174 ************************************
00:40:34.174 START TEST nvmf_nmic
00:40:34.174 ************************************
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:40:34.174 * Looking for test storage...
00:40:34.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... scripts/common.sh@333-@345 split both versions on ".-:" into ver1/ver2 arrays and select the "<" comparison ...]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
[... scripts/common.sh@365-@368 compare component by component: ver1[0]=1 against ver2[0]=2, so 1.15 < 2 ...]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
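The lt/cmp_versions trace above implements a plain dotted-version comparison, one numeric component at a time. A standalone sketch of the visible behavior (the real helper also splits on "-" and ":"):

#!/usr/bin/env bash
# Per-component dotted-version "less than", as exercised by "lt 1.15 2" above.
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2 -> use the newer lcov option set"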
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
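NVME_HOSTNQN and NVME_HOSTID pin the initiator's identity for every connect the suite makes later. A sketch of the typical consumption (the nvme-cli flags are real; the address and port are this run's values, the NQN is the suite's default NVME_SUBNQN):

# How NVME_HOST and friends are typically consumed later in the suite (sketch).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562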
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 prepend /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to PATH (each already stacked several times by earlier tests), export it and echo the result ...]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
12:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:41.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:41.010 Found net devices under 0000:af:00.0: cvl_0_0 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.010 
12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:41.010 Found net devices under 0000:af:00.1: cvl_0_1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
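Editor's note: the trace above is nvmftestinit carving the two e810 ports into a loopback test topology. A minimal sketch of just the namespace and addressing steps traced so far (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from this run and will differ on other hardware; the link-up and iptables ACCEPT steps follow in the trace below):

  # target-side port moves into its own network namespace; the initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator address in the root namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0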
00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:41.010 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:41.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:41.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:40:41.010 00:40:41.010 --- 10.0.0.2 ping statistics --- 00:40:41.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.011 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:41.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:41.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:40:41.011 00:40:41.011 --- 10.0.0.1 ping statistics --- 00:40:41.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.011 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=603965 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 603965 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 603965 ']' 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:41.011 12:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 [2024-12-13 12:46:07.939503] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:41.011 [2024-12-13 12:46:07.940452] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:41.011 [2024-12-13 12:46:07.940490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:41.011 [2024-12-13 12:46:08.016272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:41.011 [2024-12-13 12:46:08.040885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:41.011 [2024-12-13 12:46:08.040923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:41.011 [2024-12-13 12:46:08.040931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:41.011 [2024-12-13 12:46:08.040937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:41.011 [2024-12-13 12:46:08.040942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:41.011 [2024-12-13 12:46:08.042240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.011 [2024-12-13 12:46:08.042273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:41.011 [2024-12-13 12:46:08.042358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.011 [2024-12-13 12:46:08.042358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:41.011 [2024-12-13 12:46:08.105705] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:41.011 [2024-12-13 12:46:08.106500] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:41.011 [2024-12-13 12:46:08.106690] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
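Editor's note: nvmfappstart runs the target inside the namespace with all four cores and --interrupt-mode, which is why every reactor and spdk_thread notice above reports intr mode. A condensed sketch of the launch recorded above (the backgrounding and $! capture are an assumption about how nvmfappstart obtains nvmfpid; pid 603965 is specific to this run):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &   # -m 0xF: reactors on cores 0-3
  nvmfpid=$!                                   # 603965 in this run
  waitforlisten "$nvmfpid"                     # blocks until /var/tmp/spdk.sock accepts RPCs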
00:40:41.011 [2024-12-13 12:46:08.107141] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:41.011 [2024-12-13 12:46:08.107178] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 [2024-12-13 12:46:08.179391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 Malloc0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
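Editor's note: nmic.sh's setup is a five-call RPC sequence; rpc_cmd is the harness wrapper, assumed here to resolve to scripts/rpc.py against /var/tmp/spdk.sock. Written longhand, with arguments exactly as traced:

  rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the harness's default -o/-u options
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420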
00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 [2024-12-13 12:46:08.255447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:41.011 test case1: single bdev can't be used in multiple subsystems 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.011 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.011 [2024-12-13 12:46:08.287099] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:41.011 [2024-12-13 12:46:08.287118] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:41.011 [2024-12-13 12:46:08.287125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:41.011 request: 00:40:41.011 { 00:40:41.011 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:41.011 "namespace": { 00:40:41.011 "bdev_name": "Malloc0", 00:40:41.011 "no_auto_visible": false, 00:40:41.011 "hide_metadata": false 00:40:41.011 }, 00:40:41.012 "method": "nvmf_subsystem_add_ns", 00:40:41.012 "req_id": 1 00:40:41.012 } 00:40:41.012 Got JSON-RPC error response 00:40:41.012 response: 00:40:41.012 { 00:40:41.012 "code": -32602, 00:40:41.012 "message": "Invalid parameters" 00:40:41.012 } 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:41.012 12:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:41.012 Adding namespace failed - expected result. 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:41.012 test case2: host connect to nvmf target in multiple paths 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:41.012 [2024-12-13 12:46:08.299188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:41.012 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:41.271 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:41.271 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:41.271 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:41.271 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:41.271 12:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:43.177 12:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:43.450 [global] 00:40:43.450 thread=1 00:40:43.450 invalidate=1 
00:40:43.450 rw=write 00:40:43.450 time_based=1 00:40:43.450 runtime=1 00:40:43.450 ioengine=libaio 00:40:43.450 direct=1 00:40:43.450 bs=4096 00:40:43.450 iodepth=1 00:40:43.450 norandommap=0 00:40:43.450 numjobs=1 00:40:43.450 00:40:43.450 verify_dump=1 00:40:43.450 verify_backlog=512 00:40:43.450 verify_state_save=0 00:40:43.450 do_verify=1 00:40:43.450 verify=crc32c-intel 00:40:43.450 [job0] 00:40:43.450 filename=/dev/nvme0n1 00:40:43.450 Could not set queue depth (nvme0n1) 00:40:43.711 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:43.711 fio-3.35 00:40:43.711 Starting 1 thread 00:40:44.647 00:40:44.647 job0: (groupid=0, jobs=1): err= 0: pid=604573: Fri Dec 13 12:46:12 2024 00:40:44.647 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:40:44.647 slat (nsec): min=6290, max=27683, avg=7222.92, stdev=841.78 00:40:44.647 clat (usec): min=193, max=475, avg=224.44, stdev=20.75 00:40:44.647 lat (usec): min=200, max=503, avg=231.66, stdev=20.84 00:40:44.647 clat percentiles (usec): 00:40:44.647 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 202], 20.00th=[ 206], 00:40:44.647 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 237], 00:40:44.647 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:40:44.647 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 297], 99.95th=[ 326], 00:40:44.647 | 99.99th=[ 478] 00:40:44.647 write: IOPS=2696, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:40:44.647 slat (nsec): min=9171, max=39087, avg=10288.42, stdev=1222.11 00:40:44.647 clat (usec): min=121, max=307, avg=136.17, stdev= 6.69 00:40:44.647 lat (usec): min=132, max=346, avg=146.45, stdev= 7.06 00:40:44.647 clat percentiles (usec): 00:40:44.647 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:40:44.647 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:40:44.647 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 145], 00:40:44.647 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 225], 99.95th=[ 269], 00:40:44.647 | 99.99th=[ 310] 00:40:44.647 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:40:44.647 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:44.647 lat (usec) : 250=94.71%, 500=5.29% 00:40:44.647 cpu : usr=3.10%, sys=4.30%, ctx=5260, majf=0, minf=1 00:40:44.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.647 issued rwts: total=2560,2699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:44.647 00:40:44.647 Run status group 0 (all jobs): 00:40:44.647 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:44.647 WRITE: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.1MB), run=1001-1001msec 00:40:44.647 00:40:44.647 Disk stats (read/write): 00:40:44.647 nvme0n1: ios=2296/2560, merge=0/0, ticks=499/345, in_queue=844, util=91.48% 00:40:44.647 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:44.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.907 rmmod nvme_tcp 00:40:44.907 rmmod nvme_fabrics 00:40:44.907 rmmod nvme_keyring 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 603965 ']' 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 603965 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 603965 ']' 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 603965 00:40:44.907 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603965 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603965' 00:40:45.166 killing process with pid 603965 
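Editor's note: teardown mirrors the setup. Condensed from the commands traced in this block (the disconnect reports two controllers because case2 connected the host over both port 4420 and port 4421; pid 603965 is this run's target):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths in one call
  modprobe -v -r nvme-tcp                         # nvme_tcp, nvme_fabrics, nvme_keyring unload, per the rmmod lines above
  kill 603965                                     # killprocess on the recorded nvmfpid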
00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 603965 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 603965 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:45.166 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:47.704 00:40:47.704 real 0m13.188s 00:40:47.704 user 0m24.814s 00:40:47.704 sys 0m6.087s 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:47.704 ************************************ 00:40:47.704 END TEST nvmf_nmic 00:40:47.704 ************************************ 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:47.704 ************************************ 00:40:47.704 START TEST nvmf_fio_target 00:40:47.704 ************************************ 00:40:47.704 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:47.704 * Looking for test storage... 
00:40:47.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:47.704 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.705 --rc genhtml_branch_coverage=1 00:40:47.705 --rc genhtml_function_coverage=1 00:40:47.705 --rc genhtml_legend=1 00:40:47.705 --rc geninfo_all_blocks=1 00:40:47.705 --rc geninfo_unexecuted_blocks=1 00:40:47.705 00:40:47.705 ' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.705 --rc genhtml_branch_coverage=1 00:40:47.705 --rc genhtml_function_coverage=1 00:40:47.705 --rc genhtml_legend=1 00:40:47.705 --rc geninfo_all_blocks=1 00:40:47.705 --rc geninfo_unexecuted_blocks=1 00:40:47.705 00:40:47.705 ' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.705 --rc genhtml_branch_coverage=1 00:40:47.705 --rc genhtml_function_coverage=1 00:40:47.705 --rc genhtml_legend=1 00:40:47.705 --rc geninfo_all_blocks=1 00:40:47.705 --rc geninfo_unexecuted_blocks=1 00:40:47.705 00:40:47.705 ' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.705 --rc genhtml_branch_coverage=1 00:40:47.705 --rc genhtml_function_coverage=1 00:40:47.705 --rc genhtml_legend=1 00:40:47.705 --rc geninfo_all_blocks=1 00:40:47.705 --rc geninfo_unexecuted_blocks=1 00:40:47.705 
00:40:47.705 ' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:47.705 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:47.706 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:54.282 12:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:54.282 12:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:54.282 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:54.282 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:54.282 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:54.282 Found net devices under 0000:af:00.1: cvl_0_1 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:54.282 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:54.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:54.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:40:54.317 00:40:54.317 --- 10.0.0.2 ping statistics --- 00:40:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.317 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:54.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:54.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:40:54.317 00:40:54.317 --- 10.0.0.1 ping statistics --- 00:40:54.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.317 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:54.317 12:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=608242 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 608242 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 608242 ']' 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
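
At this point common.sh has wired the two E810 ports into a self-contained loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), an iptables rule whitelists TCP/4420, and both directions are ping-verified above. Splitting the ports across namespaces makes the NVMe/TCP traffic actually traverse the NICs rather than being short-circuited through the local stack. Condensed from the trace, with the nvmf_tgt path shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp                                       # kernel initiator for the connect below
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
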
00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.317 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.317 [2024-12-13 12:46:21.078772] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:54.317 [2024-12-13 12:46:21.079687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:54.317 [2024-12-13 12:46:21.079721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:54.317 [2024-12-13 12:46:21.156601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:54.317 [2024-12-13 12:46:21.179466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:54.317 [2024-12-13 12:46:21.179503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:54.317 [2024-12-13 12:46:21.179509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:54.317 [2024-12-13 12:46:21.179515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:54.318 [2024-12-13 12:46:21.179520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:54.318 [2024-12-13 12:46:21.180874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:54.318 [2024-12-13 12:46:21.180981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:54.318 [2024-12-13 12:46:21.181087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.318 [2024-12-13 12:46:21.181089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:54.318 [2024-12-13 12:46:21.244367] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:54.318 [2024-12-13 12:46:21.245026] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:54.318 [2024-12-13 12:46:21.245397] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:54.318 [2024-12-13 12:46:21.245849] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:54.318 [2024-12-13 12:46:21.245885] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
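
The notices above are the distinguishing feature of this run: with --interrupt-mode, nvmf_tgt's reactors do not busy-poll as SPDK normally does. Each reactor (cores 0-3 here) parks on an event fd and wakes only when work arrives, and the app_thread plus the four nvmf_tgt_poll_group threads are all switched to intr mode before any I/O is issued. A quick external sanity check, assuming the repo's scripts/rpc.py is on hand (framework_get_reactors is a standard SPDK RPC; the top one-liner is only illustrative):

  # Idle reactors should accumulate idle ticks, not busy ticks, in interrupt mode:
  ./scripts/rpc.py framework_get_reactors
  # And the process should not pin four cores at ~100% while quiescent:
  top -b -n 1 -p "$(pgrep -d, -f nvmf_tgt)"
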
00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:54.318 [2024-12-13 12:46:21.481846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:54.318 12:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.577 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:54.577 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.836 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:54.836 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:55.095 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:55.095 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:55.095 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:55.354 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:55.354 12:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:55.613 12:46:23 
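
target/fio.sh now provisions the target over JSON-RPC: a TCP transport with an 8 KiB I/O unit size (-u 8192; -o is a TCP-specific toggle carried in NVMF_TRANSPORT_OPTS), then seven 64 MB malloc RAM-disk bdevs (bdev_malloc_create takes size-in-MB and block-size). The entries that follow assemble a raid0 over Malloc2/Malloc3 and a concat array over Malloc4-6, export Malloc0, Malloc1, raid0 and concat0 as namespaces of subsystem nqn.2016-06.io.spdk:cnode1, add a listener on 10.0.0.2:4420, and connect the kernel initiator, after which the four namespaces surface as /dev/nvme0n1..n4 for the fio sweeps below (QD1 and QD128, sequential and random writes, then a 10-second read). Boiled down to the bare RPC sequence, with the rpc.py path shortened and the connect's --hostnqn/--hostid elided:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for _ in $(seq 1 7); do rpc.py bdev_malloc_create 64 512; done    # Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
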
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:55.613 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:55.872 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:55.872 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:55.872 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:56.132 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:56.132 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:56.390 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:56.649 [2024-12-13 12:46:24.137756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:56.649 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:56.908 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:56.908 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:57.167 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:57.167 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:57.167 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:57.167 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:57.167 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:57.168 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:40:59.700 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:59.700 [global] 00:40:59.700 thread=1 00:40:59.700 invalidate=1 00:40:59.700 rw=write 00:40:59.700 time_based=1 00:40:59.700 runtime=1 00:40:59.700 ioengine=libaio 00:40:59.700 direct=1 00:40:59.700 bs=4096 00:40:59.700 iodepth=1 00:40:59.700 norandommap=0 00:40:59.700 numjobs=1 00:40:59.700 00:40:59.700 verify_dump=1 00:40:59.700 verify_backlog=512 00:40:59.700 verify_state_save=0 00:40:59.700 do_verify=1 00:40:59.700 verify=crc32c-intel 00:40:59.700 [job0] 00:40:59.700 filename=/dev/nvme0n1 00:40:59.700 [job1] 00:40:59.700 filename=/dev/nvme0n2 00:40:59.700 [job2] 00:40:59.700 filename=/dev/nvme0n3 00:40:59.700 [job3] 00:40:59.700 filename=/dev/nvme0n4 00:40:59.700 Could not set queue depth (nvme0n1) 00:40:59.700 Could not set queue depth (nvme0n2) 00:40:59.700 Could not set queue depth (nvme0n3) 00:40:59.700 Could not set queue depth (nvme0n4) 00:40:59.700 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:59.700 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:59.700 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:59.700 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:59.700 fio-3.35 00:40:59.700 Starting 4 threads 00:41:01.077 00:41:01.077 job0: (groupid=0, jobs=1): err= 0: pid=609352: Fri Dec 13 12:46:28 2024 00:41:01.077 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:41:01.077 slat (nsec): min=12534, max=42750, avg=20858.68, stdev=7560.42 00:41:01.077 clat (usec): min=40725, max=41077, avg=40962.62, stdev=78.36 00:41:01.077 lat (usec): min=40737, max=41093, avg=40983.48, stdev=78.56 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:41:01.077 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:01.077 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:01.077 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:01.077 | 99.99th=[41157] 00:41:01.077 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:41:01.077 slat (nsec): min=9800, max=66318, avg=15836.29, stdev=9190.58 00:41:01.077 clat (usec): min=139, max=336, avg=217.70, stdev=33.09 00:41:01.077 lat (usec): min=149, max=403, avg=233.54, stdev=34.55 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 188], 00:41:01.077 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 231], 60.00th=[ 237], 00:41:01.077 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 265], 00:41:01.077 | 
99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 338], 00:41:01.077 | 99.99th=[ 338] 00:41:01.077 bw ( KiB/s): min= 4096, max= 4096, per=24.07%, avg=4096.00, stdev= 0.00, samples=1 00:41:01.077 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:01.077 lat (usec) : 250=86.52%, 500=9.36% 00:41:01.077 lat (msec) : 50=4.12% 00:41:01.077 cpu : usr=0.29%, sys=0.68%, ctx=535, majf=0, minf=1 00:41:01.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.077 job1: (groupid=0, jobs=1): err= 0: pid=609353: Fri Dec 13 12:46:28 2024 00:41:01.077 read: IOPS=1216, BW=4865KiB/s (4982kB/s)(4928KiB/1013msec) 00:41:01.077 slat (nsec): min=6243, max=26072, avg=7378.44, stdev=1608.93 00:41:01.077 clat (usec): min=190, max=41817, avg=580.58, stdev=3663.69 00:41:01.077 lat (usec): min=197, max=41840, avg=587.96, stdev=3664.86 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 233], 00:41:01.077 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:41:01.077 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 302], 00:41:01.077 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:41:01.077 | 99.99th=[41681] 00:41:01.077 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:41:01.077 slat (nsec): min=8982, max=38820, avg=10697.77, stdev=1542.77 00:41:01.077 clat (usec): min=140, max=311, avg=172.86, stdev=21.07 00:41:01.077 lat (usec): min=150, max=350, avg=183.56, stdev=21.41 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:41:01.077 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:41:01.077 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 204], 95.00th=[ 223], 00:41:01.077 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 306], 99.95th=[ 314], 00:41:01.077 | 99.99th=[ 314] 00:41:01.077 bw ( KiB/s): min= 2376, max= 9912, per=36.11%, avg=6144.00, stdev=5328.76, samples=2 00:41:01.077 iops : min= 594, max= 2478, avg=1536.00, stdev=1332.19, samples=2 00:41:01.077 lat (usec) : 250=89.49%, 500=9.50%, 750=0.65% 00:41:01.077 lat (msec) : 50=0.36% 00:41:01.077 cpu : usr=1.38%, sys=2.57%, ctx=2768, majf=0, minf=2 00:41:01.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 issued rwts: total=1232,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.077 job2: (groupid=0, jobs=1): err= 0: pid=609354: Fri Dec 13 12:46:28 2024 00:41:01.077 read: IOPS=1055, BW=4223KiB/s (4324kB/s)(4324KiB/1024msec) 00:41:01.077 slat (nsec): min=6682, max=30638, avg=7712.04, stdev=1983.94 00:41:01.077 clat (usec): min=180, max=41501, avg=649.70, stdev=4098.34 00:41:01.077 lat (usec): min=187, max=41508, avg=657.42, stdev=4099.23 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:41:01.077 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 
60.00th=[ 241], 00:41:01.077 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 281], 00:41:01.077 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:41:01.077 | 99.99th=[41681] 00:41:01.077 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:41:01.077 slat (nsec): min=9305, max=38417, avg=10542.19, stdev=1401.54 00:41:01.077 clat (usec): min=136, max=437, avg=189.60, stdev=39.68 00:41:01.077 lat (usec): min=146, max=447, avg=200.15, stdev=39.81 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:41:01.077 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 176], 00:41:01.077 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 245], 00:41:01.077 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 408], 99.95th=[ 437], 00:41:01.077 | 99.99th=[ 437] 00:41:01.077 bw ( KiB/s): min= 2712, max= 9576, per=36.11%, avg=6144.00, stdev=4853.58, samples=2 00:41:01.077 iops : min= 678, max= 2394, avg=1536.00, stdev=1213.40, samples=2 00:41:01.077 lat (usec) : 250=92.51%, 500=7.07% 00:41:01.077 lat (msec) : 50=0.42% 00:41:01.077 cpu : usr=0.98%, sys=2.64%, ctx=2617, majf=0, minf=1 00:41:01.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.077 issued rwts: total=1081,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.077 job3: (groupid=0, jobs=1): err= 0: pid=609355: Fri Dec 13 12:46:28 2024 00:41:01.077 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:41:01.077 slat (nsec): min=6573, max=24563, avg=8025.48, stdev=2997.74 00:41:01.077 clat (usec): min=179, max=41453, avg=1631.85, stdev=7439.34 00:41:01.077 lat (usec): min=187, max=41462, avg=1639.87, stdev=7442.17 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:41:01.077 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:41:01.077 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 249], 95.00th=[ 262], 00:41:01.077 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:41:01.077 | 99.99th=[41681] 00:41:01.077 write: IOPS=771, BW=3085KiB/s (3159kB/s)(3088KiB/1001msec); 0 zone resets 00:41:01.077 slat (nsec): min=9579, max=38734, avg=10734.20, stdev=1495.63 00:41:01.077 clat (usec): min=131, max=352, avg=193.39, stdev=47.15 00:41:01.077 lat (usec): min=141, max=391, avg=204.13, stdev=47.25 00:41:01.077 clat percentiles (usec): 00:41:01.077 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:41:01.077 | 30.00th=[ 145], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 239], 00:41:01.077 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 245], 00:41:01.077 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 355], 99.95th=[ 355], 00:41:01.077 | 99.99th=[ 355] 00:41:01.077 bw ( KiB/s): min= 4096, max= 4096, per=24.07%, avg=4096.00, stdev= 0.00, samples=1 00:41:01.077 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:01.077 lat (usec) : 250=95.25%, 500=3.27%, 750=0.08% 00:41:01.077 lat (msec) : 50=1.40% 00:41:01.077 cpu : usr=0.70%, sys=1.10%, ctx=1286, majf=0, minf=1 00:41:01.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:01.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.078 issued rwts: total=512,772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.078 00:41:01.078 Run status group 0 (all jobs): 00:41:01.078 READ: bw=10.9MiB/s (11.4MB/s), 86.0KiB/s-4865KiB/s (88.1kB/s-4982kB/s), io=11.1MiB (11.7MB), run=1001-1024msec 00:41:01.078 WRITE: bw=16.6MiB/s (17.4MB/s), 2002KiB/s-6065KiB/s (2050kB/s-6211kB/s), io=17.0MiB (17.8MB), run=1001-1024msec 00:41:01.078 00:41:01.078 Disk stats (read/write): 00:41:01.078 nvme0n1: ios=67/512, merge=0/0, ticks=724/111, in_queue=835, util=86.97% 00:41:01.078 nvme0n2: ios=1243/1536, merge=0/0, ticks=544/254, in_queue=798, util=86.98% 00:41:01.078 nvme0n3: ios=1046/1536, merge=0/0, ticks=524/290, in_queue=814, util=89.04% 00:41:01.078 nvme0n4: ios=84/512, merge=0/0, ticks=1543/110, in_queue=1653, util=98.21% 00:41:01.078 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:01.078 [global] 00:41:01.078 thread=1 00:41:01.078 invalidate=1 00:41:01.078 rw=randwrite 00:41:01.078 time_based=1 00:41:01.078 runtime=1 00:41:01.078 ioengine=libaio 00:41:01.078 direct=1 00:41:01.078 bs=4096 00:41:01.078 iodepth=1 00:41:01.078 norandommap=0 00:41:01.078 numjobs=1 00:41:01.078 00:41:01.078 verify_dump=1 00:41:01.078 verify_backlog=512 00:41:01.078 verify_state_save=0 00:41:01.078 do_verify=1 00:41:01.078 verify=crc32c-intel 00:41:01.078 [job0] 00:41:01.078 filename=/dev/nvme0n1 00:41:01.078 [job1] 00:41:01.078 filename=/dev/nvme0n2 00:41:01.078 [job2] 00:41:01.078 filename=/dev/nvme0n3 00:41:01.078 [job3] 00:41:01.078 filename=/dev/nvme0n4 00:41:01.078 Could not set queue depth (nvme0n1) 00:41:01.078 Could not set queue depth (nvme0n2) 00:41:01.078 Could not set queue depth (nvme0n3) 00:41:01.078 Could not set queue depth (nvme0n4) 00:41:01.405 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:01.405 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:01.405 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:01.405 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:01.405 fio-3.35 00:41:01.405 Starting 4 threads 00:41:02.437 00:41:02.437 job0: (groupid=0, jobs=1): err= 0: pid=609723: Fri Dec 13 12:46:30 2024 00:41:02.437 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:41:02.437 slat (nsec): min=7029, max=24521, avg=8204.71, stdev=1432.45 00:41:02.437 clat (usec): min=220, max=41340, avg=387.77, stdev=2262.33 00:41:02.437 lat (usec): min=228, max=41349, avg=395.97, stdev=2263.04 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:41:02.437 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 258], 00:41:02.437 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:41:02.437 | 99.00th=[ 469], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:41:02.437 | 99.99th=[41157] 00:41:02.437 write: IOPS=1847, BW=7389KiB/s (7566kB/s)(7396KiB/1001msec); 0 zone resets 00:41:02.437 slat (nsec): min=10135, max=39431, avg=11876.23, stdev=1941.54 00:41:02.437 clat (usec): min=147, 
max=344, avg=193.12, stdev=29.74 00:41:02.437 lat (usec): min=158, max=358, avg=205.00, stdev=29.85 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:41:02.437 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:41:02.437 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 273], 00:41:02.437 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 302], 99.95th=[ 347], 00:41:02.437 | 99.99th=[ 347] 00:41:02.437 bw ( KiB/s): min= 5864, max= 5864, per=25.35%, avg=5864.00, stdev= 0.00, samples=1 00:41:02.437 iops : min= 1466, max= 1466, avg=1466.00, stdev= 0.00, samples=1 00:41:02.437 lat (usec) : 250=67.62%, 500=32.14%, 750=0.09% 00:41:02.437 lat (msec) : 50=0.15% 00:41:02.437 cpu : usr=3.00%, sys=5.40%, ctx=3387, majf=0, minf=1 00:41:02.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 issued rwts: total=1536,1849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:02.437 job1: (groupid=0, jobs=1): err= 0: pid=609724: Fri Dec 13 12:46:30 2024 00:41:02.437 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:41:02.437 slat (nsec): min=10003, max=28336, avg=20649.73, stdev=4976.95 00:41:02.437 clat (usec): min=40910, max=41978, avg=41215.16, stdev=414.01 00:41:02.437 lat (usec): min=40932, max=42001, avg=41235.81, stdev=414.91 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:02.437 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:02.437 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:41:02.437 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:02.437 | 99.99th=[42206] 00:41:02.437 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:41:02.437 slat (nsec): min=9803, max=39000, avg=12225.64, stdev=2340.96 00:41:02.437 clat (usec): min=139, max=263, avg=183.80, stdev=17.23 00:41:02.437 lat (usec): min=151, max=278, avg=196.02, stdev=17.62 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:41:02.437 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:41:02.437 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 00:41:02.437 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 265], 99.95th=[ 265], 00:41:02.437 | 99.99th=[ 265] 00:41:02.437 bw ( KiB/s): min= 4096, max= 4096, per=17.71%, avg=4096.00, stdev= 0.00, samples=1 00:41:02.437 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:02.437 lat (usec) : 250=95.32%, 500=0.56% 00:41:02.437 lat (msec) : 50=4.12% 00:41:02.437 cpu : usr=0.40%, sys=0.99%, ctx=534, majf=0, minf=1 00:41:02.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:02.437 job2: (groupid=0, jobs=1): err= 0: pid=609725: Fri Dec 13 12:46:30 2024 00:41:02.437 read: IOPS=2073, BW=8296KiB/s 
(8495kB/s)(8304KiB/1001msec) 00:41:02.437 slat (nsec): min=7349, max=40938, avg=8511.62, stdev=1781.40 00:41:02.437 clat (usec): min=191, max=392, avg=226.42, stdev=14.70 00:41:02.437 lat (usec): min=199, max=402, avg=234.93, stdev=14.78 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 215], 00:41:02.437 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:41:02.437 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:41:02.437 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 363], 00:41:02.437 | 99.99th=[ 392] 00:41:02.437 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:02.437 slat (nsec): min=10721, max=49210, avg=12803.18, stdev=2812.30 00:41:02.437 clat (usec): min=130, max=1887, avg=180.71, stdev=67.78 00:41:02.437 lat (usec): min=151, max=1899, avg=193.51, stdev=67.97 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:41:02.437 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 174], 00:41:02.437 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 217], 95.00th=[ 251], 00:41:02.437 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 1483], 99.95th=[ 1500], 00:41:02.437 | 99.99th=[ 1893] 00:41:02.437 bw ( KiB/s): min= 9800, max= 9800, per=42.37%, avg=9800.00, stdev= 0.00, samples=1 00:41:02.437 iops : min= 2450, max= 2450, avg=2450.00, stdev= 0.00, samples=1 00:41:02.437 lat (usec) : 250=94.22%, 500=5.67% 00:41:02.437 lat (msec) : 2=0.11% 00:41:02.437 cpu : usr=4.70%, sys=7.00%, ctx=4638, majf=0, minf=1 00:41:02.437 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.437 issued rwts: total=2076,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.437 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:02.437 job3: (groupid=0, jobs=1): err= 0: pid=609726: Fri Dec 13 12:46:30 2024 00:41:02.437 read: IOPS=560, BW=2241KiB/s (2295kB/s)(2304KiB/1028msec) 00:41:02.437 slat (nsec): min=6523, max=24720, avg=7739.63, stdev=2645.53 00:41:02.437 clat (usec): min=191, max=42250, avg=1436.95, stdev=6955.18 00:41:02.437 lat (usec): min=199, max=42257, avg=1444.69, stdev=6955.27 00:41:02.437 clat percentiles (usec): 00:41:02.437 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:41:02.437 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 221], 60.00th=[ 225], 00:41:02.437 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 273], 00:41:02.437 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:02.437 | 99.99th=[42206] 00:41:02.437 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:41:02.437 slat (nsec): min=8977, max=69716, avg=11719.98, stdev=3081.08 00:41:02.438 clat (usec): min=136, max=423, avg=174.77, stdev=24.47 00:41:02.438 lat (usec): min=146, max=439, avg=186.49, stdev=26.40 00:41:02.438 clat percentiles (usec): 00:41:02.438 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:41:02.438 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 178], 00:41:02.438 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 215], 00:41:02.438 | 99.00th=[ 243], 99.50th=[ 277], 99.90th=[ 367], 99.95th=[ 424], 00:41:02.438 | 99.99th=[ 424] 00:41:02.438 bw ( KiB/s): min= 8192, max= 8192, per=35.41%, avg=8192.00, stdev= 
0.00, samples=1 00:41:02.438 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:02.438 lat (usec) : 250=96.12%, 500=2.81% 00:41:02.438 lat (msec) : 50=1.06% 00:41:02.438 cpu : usr=0.97%, sys=1.46%, ctx=1601, majf=0, minf=1 00:41:02.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.438 issued rwts: total=576,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:02.438 00:41:02.438 Run status group 0 (all jobs): 00:41:02.438 READ: bw=16.0MiB/s (16.8MB/s), 87.2KiB/s-8296KiB/s (89.3kB/s-8495kB/s), io=16.4MiB (17.2MB), run=1001-1028msec 00:41:02.438 WRITE: bw=22.6MiB/s (23.7MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=23.2MiB (24.3MB), run=1001-1028msec 00:41:02.438 00:41:02.438 Disk stats (read/write): 00:41:02.438 nvme0n1: ios=1077/1536, merge=0/0, ticks=1294/289, in_queue=1583, util=89.18% 00:41:02.438 nvme0n2: ios=67/512, merge=0/0, ticks=767/90, in_queue=857, util=86.32% 00:41:02.438 nvme0n3: ios=1690/2048, merge=0/0, ticks=545/339, in_queue=884, util=97.29% 00:41:02.438 nvme0n4: ios=591/1024, merge=0/0, ticks=601/165, in_queue=766, util=90.08% 00:41:02.438 12:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:02.438 [global] 00:41:02.438 thread=1 00:41:02.438 invalidate=1 00:41:02.438 rw=write 00:41:02.438 time_based=1 00:41:02.438 runtime=1 00:41:02.438 ioengine=libaio 00:41:02.438 direct=1 00:41:02.438 bs=4096 00:41:02.438 iodepth=128 00:41:02.438 norandommap=0 00:41:02.438 numjobs=1 00:41:02.438 00:41:02.438 verify_dump=1 00:41:02.438 verify_backlog=512 00:41:02.438 verify_state_save=0 00:41:02.438 do_verify=1 00:41:02.438 verify=crc32c-intel 00:41:02.438 [job0] 00:41:02.438 filename=/dev/nvme0n1 00:41:02.438 [job1] 00:41:02.438 filename=/dev/nvme0n2 00:41:02.438 [job2] 00:41:02.438 filename=/dev/nvme0n3 00:41:02.438 [job3] 00:41:02.438 filename=/dev/nvme0n4 00:41:02.728 Could not set queue depth (nvme0n1) 00:41:02.728 Could not set queue depth (nvme0n2) 00:41:02.728 Could not set queue depth (nvme0n3) 00:41:02.728 Could not set queue depth (nvme0n4) 00:41:02.728 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.728 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.728 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.728 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.728 fio-3.35 00:41:02.728 Starting 4 threads 00:41:04.105 00:41:04.105 job0: (groupid=0, jobs=1): err= 0: pid=610092: Fri Dec 13 12:46:31 2024 00:41:04.105 read: IOPS=5200, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1009msec) 00:41:04.105 slat (nsec): min=1525, max=14116k, avg=96420.83, stdev=790736.14 00:41:04.105 clat (usec): min=3500, max=31510, avg=12763.24, stdev=4806.80 00:41:04.105 lat (usec): min=4870, max=31520, avg=12859.66, stdev=4866.84 00:41:04.105 clat percentiles (usec): 00:41:04.105 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 8356], 20.00th=[ 8848], 00:41:04.105 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10945], 
60.00th=[13042], 00:41:04.105 | 70.00th=[14877], 80.00th=[16909], 90.00th=[19792], 95.00th=[22152], 00:41:04.106 | 99.00th=[27395], 99.50th=[27919], 99.90th=[29492], 99.95th=[29754], 00:41:04.106 | 99.99th=[31589] 00:41:04.106 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:41:04.106 slat (usec): min=2, max=13871, avg=79.58, stdev=681.11 00:41:04.106 clat (usec): min=1571, max=29298, avg=10830.99, stdev=4265.68 00:41:04.106 lat (usec): min=1585, max=29331, avg=10910.57, stdev=4319.25 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 7308], 20.00th=[ 7767], 00:41:04.106 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10421], 00:41:04.106 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16319], 95.00th=[19268], 00:41:04.106 | 99.00th=[24773], 99.50th=[25822], 99.90th=[26084], 99.95th=[27132], 00:41:04.106 | 99.99th=[29230] 00:41:04.106 bw ( KiB/s): min=22080, max=22968, per=32.78%, avg=22524.00, stdev=627.91, samples=2 00:41:04.106 iops : min= 5520, max= 5742, avg=5631.00, stdev=156.98, samples=2 00:41:04.106 lat (msec) : 2=0.02%, 4=0.40%, 10=46.47%, 20=46.62%, 50=6.48% 00:41:04.106 cpu : usr=5.65%, sys=6.25%, ctx=307, majf=0, minf=2 00:41:04.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:41:04.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.106 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.106 job1: (groupid=0, jobs=1): err= 0: pid=610094: Fri Dec 13 12:46:31 2024 00:41:04.106 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:41:04.106 slat (nsec): min=1482, max=21822k, avg=122699.66, stdev=1003986.28 00:41:04.106 clat (usec): min=5830, max=61559, avg=16048.67, stdev=9680.51 00:41:04.106 lat (usec): min=5833, max=63742, avg=16171.37, stdev=9759.79 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7832], 20.00th=[ 8455], 00:41:04.106 | 30.00th=[10159], 40.00th=[11994], 50.00th=[12911], 60.00th=[13173], 00:41:04.106 | 70.00th=[16319], 80.00th=[22414], 90.00th=[31065], 95.00th=[35914], 00:41:04.106 | 99.00th=[45351], 99.50th=[51643], 99.90th=[57934], 99.95th=[61604], 00:41:04.106 | 99.99th=[61604] 00:41:04.106 write: IOPS=4292, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1008msec); 0 zone resets 00:41:04.106 slat (usec): min=2, max=18905, avg=108.09, stdev=789.55 00:41:04.106 clat (usec): min=3933, max=54691, avg=14305.44, stdev=9062.43 00:41:04.106 lat (usec): min=3950, max=54702, avg=14413.53, stdev=9133.62 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 5276], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8160], 00:41:04.106 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11207], 60.00th=[12256], 00:41:04.106 | 70.00th=[12780], 80.00th=[18220], 90.00th=[30016], 95.00th=[36963], 00:41:04.106 | 99.00th=[50070], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:41:04.106 | 99.99th=[54789] 00:41:04.106 bw ( KiB/s): min=13136, max=20464, per=24.45%, avg=16800.00, stdev=5181.68, samples=2 00:41:04.106 iops : min= 3284, max= 5116, avg=4200.00, stdev=1295.42, samples=2 00:41:04.106 lat (msec) : 4=0.06%, 10=31.13%, 20=47.88%, 50=20.21%, 100=0.72% 00:41:04.106 cpu : usr=4.17%, sys=5.56%, ctx=320, majf=0, minf=1 00:41:04.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:04.106 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.106 issued rwts: total=4096,4327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.106 job2: (groupid=0, jobs=1): err= 0: pid=610102: Fri Dec 13 12:46:31 2024 00:41:04.106 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:41:04.106 slat (nsec): min=1947, max=19664k, avg=163150.14, stdev=1170511.10 00:41:04.106 clat (usec): min=5319, max=65383, avg=19730.77, stdev=11402.06 00:41:04.106 lat (usec): min=5330, max=65402, avg=19893.92, stdev=11508.36 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 9241], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:41:04.106 | 30.00th=[11076], 40.00th=[13829], 50.00th=[15795], 60.00th=[18220], 00:41:04.106 | 70.00th=[23200], 80.00th=[26346], 90.00th=[38011], 95.00th=[45876], 00:41:04.106 | 99.00th=[53216], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:41:04.106 | 99.99th=[65274] 00:41:04.106 write: IOPS=2261, BW=9045KiB/s (9262kB/s)(9144KiB/1011msec); 0 zone resets 00:41:04.106 slat (usec): min=2, max=68877, avg=268.61, stdev=2380.59 00:41:04.106 clat (usec): min=292, max=186084, avg=29234.27, stdev=26889.15 00:41:04.106 lat (usec): min=323, max=186120, avg=29502.88, stdev=27230.36 00:41:04.106 clat percentiles (msec): 00:41:04.106 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:41:04.106 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 24], 00:41:04.106 | 70.00th=[ 49], 80.00th=[ 53], 90.00th=[ 66], 95.00th=[ 82], 00:41:04.106 | 99.00th=[ 100], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:41:04.106 | 99.99th=[ 186] 00:41:04.106 bw ( KiB/s): min= 6528, max=10744, per=12.57%, avg=8636.00, stdev=2981.16, samples=2 00:41:04.106 iops : min= 1632, max= 2686, avg=2159.00, stdev=745.29, samples=2 00:41:04.106 lat (usec) : 500=0.05%, 750=0.16% 00:41:04.106 lat (msec) : 4=2.10%, 10=20.21%, 20=38.86%, 50=23.12%, 100=15.16% 00:41:04.106 lat (msec) : 250=0.35% 00:41:04.106 cpu : usr=1.78%, sys=2.57%, ctx=205, majf=0, minf=1 00:41:04.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:41:04.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.106 issued rwts: total=2048,2286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.106 job3: (groupid=0, jobs=1): err= 0: pid=610103: Fri Dec 13 12:46:31 2024 00:41:04.106 read: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec) 00:41:04.106 slat (nsec): min=1173, max=20640k, avg=106321.72, stdev=956342.36 00:41:04.106 clat (usec): min=1368, max=39080, avg=14827.72, stdev=5176.52 00:41:04.106 lat (usec): min=2471, max=43050, avg=14934.05, stdev=5250.09 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 4293], 5.00th=[ 7308], 10.00th=[10028], 20.00th=[10814], 00:41:04.106 | 30.00th=[11338], 40.00th=[12911], 50.00th=[14222], 60.00th=[15533], 00:41:04.106 | 70.00th=[16581], 80.00th=[18482], 90.00th=[22152], 95.00th=[23987], 00:41:04.106 | 99.00th=[28967], 99.50th=[32637], 99.90th=[36963], 99.95th=[37487], 00:41:04.106 | 99.99th=[39060] 00:41:04.106 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:41:04.106 slat (nsec): min=1986, max=15727k, avg=81010.87, stdev=682498.45 00:41:04.106 clat 
(usec): min=848, max=35888, avg=11190.52, stdev=3314.22 00:41:04.106 lat (usec): min=858, max=35895, avg=11271.53, stdev=3371.94 00:41:04.106 clat percentiles (usec): 00:41:04.106 | 1.00th=[ 3294], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 8717], 00:41:04.106 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11207], 60.00th=[11469], 00:41:04.106 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15401], 95.00th=[16712], 00:41:04.106 | 99.00th=[22676], 99.50th=[23462], 99.90th=[26084], 99.95th=[26870], 00:41:04.106 | 99.99th=[35914] 00:41:04.106 bw ( KiB/s): min=20472, max=20480, per=29.80%, avg=20476.00, stdev= 5.66, samples=2 00:41:04.106 iops : min= 5118, max= 5120, avg=5119.00, stdev= 1.41, samples=2 00:41:04.106 lat (usec) : 1000=0.05% 00:41:04.106 lat (msec) : 2=0.01%, 4=1.36%, 10=22.51%, 20=68.36%, 50=7.71% 00:41:04.106 cpu : usr=3.08%, sys=4.96%, ctx=408, majf=0, minf=1 00:41:04.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:04.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.106 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.106 00:41:04.106 Run status group 0 (all jobs): 00:41:04.106 READ: bw=62.3MiB/s (65.3MB/s), 8103KiB/s-20.3MiB/s (8297kB/s-21.3MB/s), io=63.0MiB (66.1MB), run=1008-1011msec 00:41:04.106 WRITE: bw=67.1MiB/s (70.4MB/s), 9045KiB/s-21.8MiB/s (9262kB/s-22.9MB/s), io=67.8MiB (71.1MB), run=1008-1011msec 00:41:04.106 00:41:04.106 Disk stats (read/write): 00:41:04.106 nvme0n1: ios=4630/4701, merge=0/0, ticks=53735/45261, in_queue=98996, util=96.79% 00:41:04.106 nvme0n2: ios=3611/3688, merge=0/0, ticks=27830/22170, in_queue=50000, util=96.10% 00:41:04.106 nvme0n3: ios=1051/1430, merge=0/0, ticks=21243/49143, in_queue=70386, util=98.49% 00:41:04.106 nvme0n4: ios=4153/4183, merge=0/0, ticks=54632/41828, in_queue=96460, util=98.02% 00:41:04.106 12:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:04.106 [global] 00:41:04.106 thread=1 00:41:04.106 invalidate=1 00:41:04.106 rw=randwrite 00:41:04.106 time_based=1 00:41:04.106 runtime=1 00:41:04.106 ioengine=libaio 00:41:04.106 direct=1 00:41:04.106 bs=4096 00:41:04.106 iodepth=128 00:41:04.106 norandommap=0 00:41:04.106 numjobs=1 00:41:04.106 00:41:04.106 verify_dump=1 00:41:04.106 verify_backlog=512 00:41:04.106 verify_state_save=0 00:41:04.106 do_verify=1 00:41:04.106 verify=crc32c-intel 00:41:04.106 [job0] 00:41:04.106 filename=/dev/nvme0n1 00:41:04.106 [job1] 00:41:04.106 filename=/dev/nvme0n2 00:41:04.106 [job2] 00:41:04.106 filename=/dev/nvme0n3 00:41:04.106 [job3] 00:41:04.106 filename=/dev/nvme0n4 00:41:04.106 Could not set queue depth (nvme0n1) 00:41:04.106 Could not set queue depth (nvme0n2) 00:41:04.106 Could not set queue depth (nvme0n3) 00:41:04.106 Could not set queue depth (nvme0n4) 00:41:04.366 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:04.366 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:04.366 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:04.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:04.366 fio-3.35 00:41:04.366 Starting 4 threads 00:41:05.743 00:41:05.743 job0: (groupid=0, jobs=1): err= 0: pid=610464: Fri Dec 13 12:46:33 2024 00:41:05.743 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:41:05.743 slat (nsec): min=1613, max=17346k, avg=148261.33, stdev=1001073.34 00:41:05.743 clat (usec): min=9334, max=58216, avg=19642.69, stdev=9238.83 00:41:05.743 lat (usec): min=9341, max=66752, avg=19790.95, stdev=9324.51 00:41:05.743 clat percentiles (usec): 00:41:05.743 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[11469], 20.00th=[12518], 00:41:05.743 | 30.00th=[13042], 40.00th=[14746], 50.00th=[17957], 60.00th=[18220], 00:41:05.743 | 70.00th=[20317], 80.00th=[25297], 90.00th=[34866], 95.00th=[40109], 00:41:05.743 | 99.00th=[46400], 99.50th=[49546], 99.90th=[55313], 99.95th=[55313], 00:41:05.743 | 99.99th=[58459] 00:41:05.743 write: IOPS=2882, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1010msec); 0 zone resets 00:41:05.743 slat (usec): min=2, max=21890, avg=206.12, stdev=1087.06 00:41:05.743 clat (msec): min=5, max=103, avg=25.56, stdev=17.98 00:41:05.743 lat (msec): min=5, max=103, avg=25.77, stdev=18.08 00:41:05.743 clat percentiles (msec): 00:41:05.743 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:41:05.743 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 22], 00:41:05.743 | 70.00th=[ 25], 80.00th=[ 30], 90.00th=[ 42], 95.00th=[ 77], 00:41:05.743 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 104], 99.95th=[ 104], 00:41:05.743 | 99.99th=[ 104] 00:41:05.744 bw ( KiB/s): min=11000, max=11272, per=17.07%, avg=11136.00, stdev=192.33, samples=2 00:41:05.744 iops : min= 2750, max= 2818, avg=2784.00, stdev=48.08, samples=2 00:41:05.744 lat (msec) : 10=4.13%, 20=53.45%, 50=37.96%, 100=3.95%, 250=0.51% 00:41:05.744 cpu : usr=2.28%, sys=4.06%, ctx=269, majf=0, minf=1 00:41:05.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:05.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:05.744 issued rwts: total=2560,2911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:05.744 job1: (groupid=0, jobs=1): err= 0: pid=610465: Fri Dec 13 12:46:33 2024 00:41:05.744 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:41:05.744 slat (nsec): min=1197, max=14301k, avg=106102.80, stdev=758237.85 00:41:05.744 clat (usec): min=4988, max=55872, avg=13040.85, stdev=7472.89 00:41:05.744 lat (usec): min=4999, max=55882, avg=13146.95, stdev=7548.10 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8356], 00:41:05.744 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10945], 00:41:05.744 | 70.00th=[12518], 80.00th=[16057], 90.00th=[21627], 95.00th=[31327], 00:41:05.744 | 99.00th=[42206], 99.50th=[45351], 99.90th=[51119], 99.95th=[55837], 00:41:05.744 | 99.99th=[55837] 00:41:05.744 write: IOPS=4322, BW=16.9MiB/s (17.7MB/s)(17.1MiB/1012msec); 0 zone resets 00:41:05.744 slat (usec): min=2, max=8296, avg=121.14, stdev=655.35 00:41:05.744 clat (usec): min=3044, max=77180, avg=17075.89, stdev=17147.61 00:41:05.744 lat (usec): min=3055, max=77191, avg=17197.03, stdev=17270.19 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 5080], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8356], 00:41:05.744 | 30.00th=[ 8717], 40.00th=[ 9110], 
50.00th=[ 9503], 60.00th=[10290], 00:41:05.744 | 70.00th=[11994], 80.00th=[24511], 90.00th=[41157], 95.00th=[68682], 00:41:05.744 | 99.00th=[74974], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:41:05.744 | 99.99th=[77071] 00:41:05.744 bw ( KiB/s): min=10448, max=23528, per=26.05%, avg=16988.00, stdev=9248.96, samples=2 00:41:05.744 iops : min= 2612, max= 5882, avg=4247.00, stdev=2312.24, samples=2 00:41:05.744 lat (msec) : 4=0.32%, 10=50.54%, 20=31.24%, 50=13.40%, 100=4.50% 00:41:05.744 cpu : usr=2.97%, sys=4.95%, ctx=405, majf=0, minf=1 00:41:05.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:05.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:05.744 issued rwts: total=4096,4374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:05.744 job2: (groupid=0, jobs=1): err= 0: pid=610466: Fri Dec 13 12:46:33 2024 00:41:05.744 read: IOPS=5401, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1007msec) 00:41:05.744 slat (nsec): min=1336, max=25258k, avg=87948.84, stdev=780442.45 00:41:05.744 clat (usec): min=2283, max=78902, avg=11087.98, stdev=5865.59 00:41:05.744 lat (usec): min=2289, max=78909, avg=11175.93, stdev=5908.75 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 5800], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:41:05.744 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9634], 00:41:05.744 | 70.00th=[11600], 80.00th=[14484], 90.00th=[16450], 95.00th=[21103], 00:41:05.744 | 99.00th=[28705], 99.50th=[31065], 99.90th=[79168], 99.95th=[79168], 00:41:05.744 | 99.99th=[79168] 00:41:05.744 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:41:05.744 slat (usec): min=2, max=44554, avg=87.10, stdev=867.42 00:41:05.744 clat (usec): min=1132, max=72226, avg=11927.17, stdev=10710.42 00:41:05.744 lat (usec): min=1156, max=73088, avg=12014.28, stdev=10779.29 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 3163], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 7308], 00:41:05.744 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9110], 00:41:05.744 | 70.00th=[ 9241], 80.00th=[11863], 90.00th=[23462], 95.00th=[32375], 00:41:05.744 | 99.00th=[67634], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:41:05.744 | 99.99th=[71828] 00:41:05.744 bw ( KiB/s): min=16384, max=28672, per=34.54%, avg=22528.00, stdev=8688.93, samples=2 00:41:05.744 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:41:05.744 lat (msec) : 2=0.09%, 4=1.27%, 10=66.41%, 20=23.11%, 50=7.84% 00:41:05.744 lat (msec) : 100=1.28% 00:41:05.744 cpu : usr=3.98%, sys=7.06%, ctx=457, majf=0, minf=1 00:41:05.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:41:05.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:05.744 issued rwts: total=5439,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:05.744 job3: (groupid=0, jobs=1): err= 0: pid=610467: Fri Dec 13 12:46:33 2024 00:41:05.744 read: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1009msec) 00:41:05.744 slat (nsec): min=1925, max=16331k, avg=131137.17, stdev=1034146.99 00:41:05.744 clat (usec): min=4385, max=47707, avg=17201.04, stdev=7526.33 00:41:05.744 lat (usec): 
min=4397, max=47716, avg=17332.18, stdev=7611.13 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10683], 00:41:05.744 | 30.00th=[11338], 40.00th=[13829], 50.00th=[16450], 60.00th=[18482], 00:41:05.744 | 70.00th=[20317], 80.00th=[22414], 90.00th=[26346], 95.00th=[32900], 00:41:05.744 | 99.00th=[42206], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:41:05.744 | 99.99th=[47449] 00:41:05.744 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:41:05.744 slat (usec): min=2, max=10518, avg=152.14, stdev=851.62 00:41:05.744 clat (usec): min=369, max=96938, avg=20758.61, stdev=17500.68 00:41:05.744 lat (usec): min=399, max=96949, avg=20910.74, stdev=17627.95 00:41:05.744 clat percentiles (usec): 00:41:05.744 | 1.00th=[ 2024], 5.00th=[ 6194], 10.00th=[ 7439], 20.00th=[ 9110], 00:41:05.744 | 30.00th=[10552], 40.00th=[11469], 50.00th=[16909], 60.00th=[18482], 00:41:05.744 | 70.00th=[24511], 80.00th=[26084], 90.00th=[38536], 95.00th=[64226], 00:41:05.744 | 99.00th=[90702], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:41:05.744 | 99.99th=[96994] 00:41:05.744 bw ( KiB/s): min= 8968, max=18936, per=21.39%, avg=13952.00, stdev=7048.44, samples=2 00:41:05.744 iops : min= 2242, max= 4734, avg=3488.00, stdev=1762.11, samples=2 00:41:05.744 lat (usec) : 500=0.04% 00:41:05.744 lat (msec) : 2=0.42%, 4=0.90%, 10=17.09%, 20=46.82%, 50=31.28% 00:41:05.744 lat (msec) : 100=3.44% 00:41:05.744 cpu : usr=3.17%, sys=4.56%, ctx=264, majf=0, minf=1 00:41:05.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:05.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:05.744 issued rwts: total=3103,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:05.744 00:41:05.744 Run status group 0 (all jobs): 00:41:05.744 READ: bw=58.7MiB/s (61.5MB/s), 9.90MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=59.4MiB (62.3MB), run=1007-1012msec 00:41:05.744 WRITE: bw=63.7MiB/s (66.8MB/s), 11.3MiB/s-21.8MiB/s (11.8MB/s-22.9MB/s), io=64.5MiB (67.6MB), run=1007-1012msec 00:41:05.744 00:41:05.744 Disk stats (read/write): 00:41:05.744 nvme0n1: ios=2461/2560, merge=0/0, ticks=22560/29602, in_queue=52162, util=97.49% 00:41:05.744 nvme0n2: ios=3092/3584, merge=0/0, ticks=25398/51879, in_queue=77277, util=96.85% 00:41:05.744 nvme0n3: ios=4241/4608, merge=0/0, ticks=43856/40456, in_queue=84312, util=97.61% 00:41:05.744 nvme0n4: ios=3115/3231, merge=0/0, ticks=37991/37817, in_queue=75808, util=96.02% 00:41:05.744 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:05.744 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=610687 00:41:05.744 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:05.744 12:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:05.744 [global] 00:41:05.744 thread=1 00:41:05.744 invalidate=1 00:41:05.744 rw=read 00:41:05.744 time_based=1 00:41:05.744 runtime=10 00:41:05.744 ioengine=libaio 00:41:05.744 direct=1 00:41:05.744 bs=4096 00:41:05.744 iodepth=1 00:41:05.744 norandommap=1 00:41:05.744 numjobs=1 00:41:05.744 00:41:05.744 
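The fio-wrapper flags on the invocation above map directly onto the [global] section it just printed: -i 4096 becomes bs=4096, -d 1 becomes iodepth=1, -t read becomes rw=read, and -r 10 becomes runtime=10 with time_based=1. A rough standalone equivalent, as a sketch only (the real job file is generated by scripts/fio-wrapper, and the device paths follow in the per-job sections below):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=read --bs=4096 --iodepth=1 \
    --time_based=1 --runtime=10 --norandommap=1 --numjobs=1
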
[job0] 00:41:05.744 filename=/dev/nvme0n1 00:41:05.744 [job1] 00:41:05.744 filename=/dev/nvme0n2 00:41:05.744 [job2] 00:41:05.744 filename=/dev/nvme0n3 00:41:05.744 [job3] 00:41:05.744 filename=/dev/nvme0n4 00:41:05.744 Could not set queue depth (nvme0n1) 00:41:05.744 Could not set queue depth (nvme0n2) 00:41:05.744 Could not set queue depth (nvme0n3) 00:41:05.744 Could not set queue depth (nvme0n4) 00:41:06.004 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.004 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.004 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.004 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.004 fio-3.35 00:41:06.004 Starting 4 threads 00:41:08.538 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:08.797 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:08.797 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1216512, buflen=4096 00:41:08.797 fio: pid=610834, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:09.056 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.056 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:09.056 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=524288, buflen=4096 00:41:09.056 fio: pid=610831, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:09.314 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.315 12:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:09.315 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=839680, buflen=4096 00:41:09.315 fio: pid=610829, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:09.574 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57131008, buflen=4096 00:41:09.574 fio: pid=610830, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:09.574 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.574 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:09.574 00:41:09.574 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610829: Fri Dec 13 12:46:37 2024 00:41:09.574 read: IOPS=65, BW=260KiB/s (266kB/s)(820KiB/3154msec) 00:41:09.574 slat (usec): min=2, max=24724, 
avg=152.73, stdev=1752.09 00:41:09.574 clat (usec): min=194, max=42223, avg=15123.76, stdev=19652.21 00:41:09.574 lat (usec): min=200, max=66930, avg=15253.95, stdev=19892.66 00:41:09.574 clat percentiles (usec): 00:41:09.574 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:41:09.574 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 260], 00:41:09.574 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:41:09.574 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:09.574 | 99.99th=[42206] 00:41:09.574 bw ( KiB/s): min= 176, max= 320, per=1.51%, avg=262.83, stdev=48.91, samples=6 00:41:09.574 iops : min= 44, max= 80, avg=65.67, stdev=12.23, samples=6 00:41:09.574 lat (usec) : 250=56.31%, 500=6.31%, 750=0.49% 00:41:09.574 lat (msec) : 50=36.41% 00:41:09.574 cpu : usr=0.00%, sys=0.10%, ctx=210, majf=0, minf=1 00:41:09.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 issued rwts: total=206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.574 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610830: Fri Dec 13 12:46:37 2024 00:41:09.574 read: IOPS=4140, BW=16.2MiB/s (17.0MB/s)(54.5MiB/3369msec) 00:41:09.574 slat (usec): min=3, max=15677, avg= 9.28, stdev=161.30 00:41:09.574 clat (usec): min=169, max=41013, avg=229.22, stdev=1064.48 00:41:09.574 lat (usec): min=180, max=41074, avg=238.50, stdev=1077.25 00:41:09.574 clat percentiles (usec): 00:41:09.574 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:41:09.574 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 00:41:09.574 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 219], 00:41:09.574 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 570], 99.95th=[41157], 00:41:09.574 | 99.99th=[41157] 00:41:09.574 bw ( KiB/s): min= 9156, max=19296, per=97.80%, avg=16928.67, stdev=4005.20, samples=6 00:41:09.574 iops : min= 2289, max= 4824, avg=4232.17, stdev=1001.30, samples=6 00:41:09.574 lat (usec) : 250=99.09%, 500=0.80%, 750=0.01% 00:41:09.574 lat (msec) : 2=0.02%, 50=0.07% 00:41:09.574 cpu : usr=1.16%, sys=5.05%, ctx=13953, majf=0, minf=2 00:41:09.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 issued rwts: total=13949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.574 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610831: Fri Dec 13 12:46:37 2024 00:41:09.574 read: IOPS=43, BW=174KiB/s (178kB/s)(512KiB/2946msec) 00:41:09.574 slat (nsec): min=6885, max=61835, avg=16979.74, stdev=8080.75 00:41:09.574 clat (usec): min=211, max=41945, avg=22827.09, stdev=20305.03 00:41:09.574 lat (usec): min=220, max=41968, avg=22844.03, stdev=20305.53 00:41:09.574 clat percentiles (usec): 00:41:09.574 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 243], 00:41:09.574 | 30.00th=[ 255], 40.00th=[ 289], 50.00th=[40633], 60.00th=[40633], 00:41:09.574 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:41:09.574 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:09.574 | 99.99th=[42206] 00:41:09.574 bw ( KiB/s): min= 104, max= 328, per=1.08%, avg=187.20, stdev=87.38, samples=5 00:41:09.574 iops : min= 26, max= 82, avg=46.80, stdev=21.84, samples=5 00:41:09.574 lat (usec) : 250=25.58%, 500=18.60% 00:41:09.574 lat (msec) : 50=55.04% 00:41:09.574 cpu : usr=0.00%, sys=0.14%, ctx=130, majf=0, minf=2 00:41:09.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.574 issued rwts: total=129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.574 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610834: Fri Dec 13 12:46:37 2024 00:41:09.574 read: IOPS=108, BW=433KiB/s (443kB/s)(1188KiB/2744msec) 00:41:09.574 slat (nsec): min=6860, max=30836, avg=10939.22, stdev=6250.28 00:41:09.574 clat (usec): min=174, max=42038, avg=9152.97, stdev=16889.12 00:41:09.574 lat (usec): min=181, max=42060, avg=9163.92, stdev=16894.96 00:41:09.574 clat percentiles (usec): 00:41:09.574 | 1.00th=[ 178], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:41:09.574 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:41:09.574 | 70.00th=[ 253], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:41:09.574 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:09.574 | 99.99th=[42206] 00:41:09.574 bw ( KiB/s): min= 96, max= 1760, per=2.69%, avg=465.60, stdev=727.96, samples=5 00:41:09.574 iops : min= 24, max= 440, avg=116.40, stdev=181.99, samples=5 00:41:09.575 lat (usec) : 250=69.13%, 500=8.72% 00:41:09.575 lat (msec) : 50=21.81% 00:41:09.575 cpu : usr=0.15%, sys=0.07%, ctx=298, majf=0, minf=2 00:41:09.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.575 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.575 issued rwts: total=298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.575 00:41:09.575 Run status group 0 (all jobs): 00:41:09.575 READ: bw=16.9MiB/s (17.7MB/s), 174KiB/s-16.2MiB/s (178kB/s-17.0MB/s), io=56.9MiB (59.7MB), run=2744-3369msec 00:41:09.575 00:41:09.575 Disk stats (read/write): 00:41:09.575 nvme0n1: ios=204/0, merge=0/0, ticks=3058/0, in_queue=3058, util=94.98% 00:41:09.575 nvme0n2: ios=13948/0, merge=0/0, ticks=3081/0, in_queue=3081, util=95.69% 00:41:09.575 nvme0n3: ios=126/0, merge=0/0, ticks=2842/0, in_queue=2842, util=96.52% 00:41:09.575 nvme0n4: ios=294/0, merge=0/0, ticks=2594/0, in_queue=2594, util=96.45% 00:41:09.834 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.834 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:09.834 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.834 
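The io_u errors in this run are deliberate: while the 10-second read job ran against the four namespaces, rpc.py deleted the concat, raid, and malloc bdevs backing them, so every job finished with "Operation not supported" on its file instead of hanging or crashing the target. The loop continuing below then removes the remaining malloc bdevs (Malloc4 through Malloc6). A minimal sketch of that hotplug pattern, assuming a running target, the stock rpc.py client, and abridged fio arguments:

# Sketch of the hotplug pattern traced above (paths and names abridged,
# not the exact fio.sh logic).
fio --name=hotplug --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=read --bs=4096 --iodepth=1 --time_based=1 --runtime=10 &
fio_pid=$!

# Pull a backing bdev while I/O is in flight; reads against the matching
# namespace should start failing with "Operation not supported".
./scripts/rpc.py bdev_raid_delete raid0
./scripts/rpc.py bdev_malloc_delete Malloc0

wait "$fio_pid" || echo "fio failed as expected"
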
12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:10.093 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:10.093 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:10.352 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:10.352 12:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 610687 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:10.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:10.612 nvmf hotplug test: fio failed as expected 00:41:10.612 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # 
trap - SIGINT SIGTERM EXIT 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:10.871 rmmod nvme_tcp 00:41:10.871 rmmod nvme_fabrics 00:41:10.871 rmmod nvme_keyring 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 608242 ']' 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 608242 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 608242 ']' 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 608242 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:10.871 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608242 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608242' 00:41:11.130 killing process with pid 608242 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 608242 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 608242 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:11.130 12:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:13.669 00:41:13.669 real 0m25.834s 00:41:13.669 user 1m31.615s 00:41:13.669 sys 0m10.758s 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:13.669 ************************************ 00:41:13.669 END TEST nvmf_fio_target 00:41:13.669 ************************************ 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:13.669 ************************************ 00:41:13.669 START TEST nvmf_bdevio 00:41:13.669 ************************************ 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:13.669 * Looking for test storage... 
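The real/user/sys timing and the START TEST / END TEST banners around nvmf_fio_target come from the run_test helper in autotest_common.sh (its line numbers are visible in the xtrace prefixes). In spirit the wrapper is roughly the following sketch, not the actual implementation:

# Hedged sketch of the run_test harness implied by the banners above.
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                 # produces the real/user/sys summary seen above
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}
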
00:41:13.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:13.669 12:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.669 --rc genhtml_branch_coverage=1 00:41:13.669 --rc genhtml_function_coverage=1 00:41:13.669 --rc genhtml_legend=1 00:41:13.669 --rc geninfo_all_blocks=1 00:41:13.669 --rc geninfo_unexecuted_blocks=1 00:41:13.669 00:41:13.669 ' 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.669 --rc genhtml_branch_coverage=1 00:41:13.669 --rc genhtml_function_coverage=1 00:41:13.669 --rc genhtml_legend=1 00:41:13.669 --rc geninfo_all_blocks=1 00:41:13.669 --rc geninfo_unexecuted_blocks=1 00:41:13.669 00:41:13.669 ' 00:41:13.669 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.669 --rc genhtml_branch_coverage=1 00:41:13.670 --rc genhtml_function_coverage=1 00:41:13.670 --rc genhtml_legend=1 00:41:13.670 --rc geninfo_all_blocks=1 00:41:13.670 --rc geninfo_unexecuted_blocks=1 00:41:13.670 00:41:13.670 ' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:13.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.670 --rc genhtml_branch_coverage=1 00:41:13.670 --rc genhtml_function_coverage=1 00:41:13.670 --rc genhtml_legend=1 00:41:13.670 --rc geninfo_all_blocks=1 00:41:13.670 --rc geninfo_unexecuted_blocks=1 00:41:13.670 00:41:13.670 ' 00:41:13.670 12:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.670 12:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:13.670 12:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:20.242 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:20.242 12:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:20.242 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:20.242 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:20.243 Found net devices under 0000:af:00.0: cvl_0_0 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:20.243 Found net devices under 0000:af:00.1: cvl_0_1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:20.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:20.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:41:20.243 00:41:20.243 --- 10.0.0.2 ping statistics --- 00:41:20.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:20.243 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:20.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:20.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:41:20.243 00:41:20.243 --- 10.0.0.1 ping statistics --- 00:41:20.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:20.243 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:20.243 12:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=615024 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 615024 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 615024 ']' 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:20.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:20.243 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.243 [2024-12-13 12:46:47.015352] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:20.243 [2024-12-13 12:46:47.016258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:20.243 [2024-12-13 12:46:47.016292] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:20.243 [2024-12-13 12:46:47.092325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:20.243 [2024-12-13 12:46:47.115120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:20.243 [2024-12-13 12:46:47.115160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:20.243 [2024-12-13 12:46:47.115167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:20.243 [2024-12-13 12:46:47.115173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:20.243 [2024-12-13 12:46:47.115178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:20.243 [2024-12-13 12:46:47.116472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:20.243 [2024-12-13 12:46:47.116567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:20.243 [2024-12-13 12:46:47.116676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:20.243 [2024-12-13 12:46:47.116677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:20.243 [2024-12-13 12:46:47.178529] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
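[Annotation] The trace above captures nvmfappstart: the SPDK target is launched inside the server-side network namespace with interrupt mode enabled and core mask 0x78 (reactors on cores 3-6), and waitforlisten blocks until the RPC socket answers before the test proceeds. A minimal sketch of that launch-and-wait pattern, run from the spdk repo root; the polling loop is an approximation of the real waitforlisten helper in autotest_common.sh, not a copy of it:

    # Start nvmf_tgt in the target namespace, interrupt mode, cores 3-6.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # waitforlisten-style gate: poll until the UNIX-domain RPC socket responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done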
00:41:20.243 [2024-12-13 12:46:47.179677] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:20.243 [2024-12-13 12:46:47.179873] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:20.243 [2024-12-13 12:46:47.180240] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:20.243 [2024-12-13 12:46:47.180280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:20.243 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.244 [2024-12-13 12:46:47.245840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.244 Malloc0 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.244 12:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.244 [2024-12-13 12:46:47.333445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.244 { 00:41:20.244 "params": { 00:41:20.244 "name": "Nvme$subsystem", 00:41:20.244 "trtype": "$TEST_TRANSPORT", 00:41:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.244 "adrfam": "ipv4", 00:41:20.244 "trsvcid": "$NVMF_PORT", 00:41:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.244 "hdgst": ${hdgst:-false}, 00:41:20.244 "ddgst": ${ddgst:-false} 00:41:20.244 }, 00:41:20.244 "method": "bdev_nvme_attach_controller" 00:41:20.244 } 00:41:20.244 EOF 00:41:20.244 )") 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:20.244 12:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.244 "params": { 00:41:20.244 "name": "Nvme1", 00:41:20.244 "trtype": "tcp", 00:41:20.244 "traddr": "10.0.0.2", 00:41:20.244 "adrfam": "ipv4", 00:41:20.244 "trsvcid": "4420", 00:41:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.244 "hdgst": false, 00:41:20.244 "ddgst": false 00:41:20.244 }, 00:41:20.244 "method": "bdev_nvme_attach_controller" 00:41:20.244 }' 00:41:20.244 [2024-12-13 12:46:47.381449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
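[Annotation] Before bdevio runs, the target is provisioned over RPC: a TCP transport, a 64 MiB Malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. gen_nvmf_target_json then assembles the bdev_nvme_attach_controller entry printed above and hands it to bdevio on fd 62 via process substitution, which is why the trace shows --json /dev/fd/62. A sketch of the same sequence; the rpc() wrapper is assumed shorthand for scripts/rpc.py, and the outer "subsystems" wrapper follows SPDK's JSON config shape rather than being copied from this log:

    rpc() { ./scripts/rpc.py "$@"; }    # assumed shorthand for the RPC client
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator-side config for bdevio, matching the printf output above.
    cat > /tmp/nvme_attach.json <<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    JSON
    ./test/bdev/bdevio/bdevio --json /tmp/nvme_attach.json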
00:41:20.244 [2024-12-13 12:46:47.381488] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615225 ] 00:41:20.244 [2024-12-13 12:46:47.455456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:20.244 [2024-12-13 12:46:47.480310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.244 [2024-12-13 12:46:47.480418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.244 [2024-12-13 12:46:47.480419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:20.244 I/O targets: 00:41:20.244 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:20.244 00:41:20.244 00:41:20.244 CUnit - A unit testing framework for C - Version 2.1-3 00:41:20.244 http://cunit.sourceforge.net/ 00:41:20.244 00:41:20.244 00:41:20.244 Suite: bdevio tests on: Nvme1n1 00:41:20.244 Test: blockdev write read block ...passed 00:41:20.244 Test: blockdev write zeroes read block ...passed 00:41:20.244 Test: blockdev write zeroes read no split ...passed 00:41:20.244 Test: blockdev write zeroes read split ...passed 00:41:20.244 Test: blockdev write zeroes read split partial ...passed 00:41:20.244 Test: blockdev reset ...[2024-12-13 12:46:47.736243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:20.244 [2024-12-13 12:46:47.736302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2236630 (9): Bad file descriptor 00:41:20.244 [2024-12-13 12:46:47.830690] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:41:20.244 passed 00:41:20.244 Test: blockdev write read 8 blocks ...passed 00:41:20.244 Test: blockdev write read size > 128k ...passed 00:41:20.244 Test: blockdev write read invalid size ...passed 00:41:20.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:20.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:20.244 Test: blockdev write read max offset ...passed 00:41:20.504 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:20.504 Test: blockdev writev readv 8 blocks ...passed 00:41:20.504 Test: blockdev writev readv 30 x 1block ...passed 00:41:20.504 Test: blockdev writev readv block ...passed 00:41:20.504 Test: blockdev writev readv size > 128k ...passed 00:41:20.504 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:20.504 Test: blockdev comparev and writev ...[2024-12-13 12:46:48.123715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.123747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.123761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.123769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:20.504 [2024-12-13 12:46:48.124720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:20.504 [2024-12-13 12:46:48.124727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:20.504 passed 00:41:20.764 Test: blockdev nvme passthru rw ...passed 00:41:20.764 Test: blockdev nvme passthru vendor specific ...[2024-12-13 12:46:48.208149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:20.764 [2024-12-13 12:46:48.208166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:20.764 [2024-12-13 12:46:48.208279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:20.764 [2024-12-13 12:46:48.208288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:20.764 [2024-12-13 12:46:48.208394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:20.764 [2024-12-13 12:46:48.208403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:20.764 [2024-12-13 12:46:48.208504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:20.764 [2024-12-13 12:46:48.208513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:20.764 passed 00:41:20.764 Test: blockdev nvme admin passthru ...passed 00:41:20.764 Test: blockdev copy ...passed 00:41:20.764 00:41:20.764 Run Summary: Type Total Ran Passed Failed Inactive 00:41:20.764 suites 1 1 n/a 0 0 00:41:20.764 tests 23 23 23 0 0 00:41:20.764 asserts 152 152 152 0 n/a 00:41:20.764 00:41:20.764 Elapsed time = 1.293 seconds 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:20.764 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:20.764 rmmod nvme_tcp 00:41:20.764 rmmod nvme_fabrics 00:41:20.764 rmmod nvme_keyring 00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
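[Annotation] With the suite green (23/23 tests, 152/152 asserts in about 1.3 s), the EXIT trap fires nvmftestfini: the subsystem is deleted over RPC and the initiator's kernel modules are unloaded, which is exactly what the rmmod lines above record; the target process kill, the SPDK-tagged iptables rules, and the namespace teardown follow in the trace just below. A sketch of the unload half as traced, again using the assumed rpc() shorthand; the retry loop body is an approximation of nvmf/common.sh, hedged rather than copied:

    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    set +e                                   # module removal may transiently fail
    for i in {1..20}; do
        # Dropping nvme-tcp also pulls out nvme_fabrics and nvme_keyring.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e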
00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 615024 ']' 00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 615024 00:41:21.023 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 615024 ']' 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 615024 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615024 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615024' 00:41:21.024 killing process with pid 615024 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 615024 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 615024 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:21.024 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:23.560 00:41:23.560 real 0m9.891s 00:41:23.560 user 0m8.760s 
00:41:23.560 sys 0m5.111s 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.560 ************************************ 00:41:23.560 END TEST nvmf_bdevio 00:41:23.560 ************************************ 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:23.560 00:41:23.560 real 4m30.511s 00:41:23.560 user 9m11.151s 00:41:23.560 sys 1m51.137s 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.560 12:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:23.560 ************************************ 00:41:23.560 END TEST nvmf_target_core_interrupt_mode 00:41:23.560 ************************************ 00:41:23.560 12:46:50 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:23.560 12:46:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:23.560 12:46:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:23.560 12:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:23.560 ************************************ 00:41:23.560 START TEST nvmf_interrupt 00:41:23.560 ************************************ 00:41:23.560 12:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:23.560 * Looking for test storage... 
00:41:23.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:23.560 12:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:23.560 12:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:23.560 12:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:23.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.560 --rc genhtml_branch_coverage=1 00:41:23.560 --rc genhtml_function_coverage=1 00:41:23.560 --rc genhtml_legend=1 00:41:23.560 --rc geninfo_all_blocks=1 00:41:23.560 --rc geninfo_unexecuted_blocks=1 00:41:23.560 00:41:23.560 ' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:23.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.560 --rc genhtml_branch_coverage=1 00:41:23.560 --rc genhtml_function_coverage=1 00:41:23.560 --rc genhtml_legend=1 00:41:23.560 --rc geninfo_all_blocks=1 00:41:23.560 --rc geninfo_unexecuted_blocks=1 00:41:23.560 00:41:23.560 ' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:23.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.560 --rc genhtml_branch_coverage=1 00:41:23.560 --rc genhtml_function_coverage=1 00:41:23.560 --rc genhtml_legend=1 00:41:23.560 --rc geninfo_all_blocks=1 00:41:23.560 --rc geninfo_unexecuted_blocks=1 00:41:23.560 00:41:23.560 ' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:23.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.560 --rc genhtml_branch_coverage=1 00:41:23.560 --rc genhtml_function_coverage=1 00:41:23.560 --rc genhtml_legend=1 00:41:23.560 --rc geninfo_all_blocks=1 00:41:23.560 --rc geninfo_unexecuted_blocks=1 00:41:23.560 00:41:23.560 ' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.560 12:46:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:23.561 12:46:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.139 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:30.140 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.140 12:46:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:30.140 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:30.140 Found net devices under 0000:af:00.0: cvl_0_0 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:30.140 Found net devices under 0000:af:00.1: cvl_0_1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:30.140 12:46:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:30.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:30.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:41:30.140 00:41:30.140 --- 10.0.0.2 ping statistics --- 00:41:30.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.140 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:30.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:30.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:41:30.140 00:41:30.140 --- 10.0.0.1 ping statistics --- 00:41:30.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:30.140 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=618734 00:41:30.140 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 618734 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 618734 ']' 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:30.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.141 12:46:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 [2024-12-13 12:46:56.981539] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:30.141 [2024-12-13 12:46:56.982450] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:30.141 [2024-12-13 12:46:56.982483] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:30.141 [2024-12-13 12:46:57.058502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:30.141 [2024-12-13 12:46:57.080103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:41:30.141 [2024-12-13 12:46:57.080139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:30.141 [2024-12-13 12:46:57.080146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:30.141 [2024-12-13 12:46:57.080155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:30.141 [2024-12-13 12:46:57.080161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:30.141 [2024-12-13 12:46:57.081200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.141 [2024-12-13 12:46:57.081202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.141 [2024-12-13 12:46:57.143208] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:30.141 [2024-12-13 12:46:57.143682] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:30.141 [2024-12-13 12:46:57.143963] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:30.141 5000+0 records in 00:41:30.141 5000+0 records out 00:41:30.141 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178496 s, 574 MB/s 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 AIO0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 [2024-12-13 12:46:57.278069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.141 12:46:57 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:30.141 [2024-12-13 12:46:57.318320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618734 0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 0 idle 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618734 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618734 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618734 1 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 1 idle 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618738 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618738 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=618985 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
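At this point the target side is fully provisioned: the dd above turns a zeroed 10 MB file into backing storage, bdev_aio_create exposes it as bdev AIO0, and the remaining rpc_cmd calls stand up the TCP transport, the subsystem, its namespace, and the 10.0.0.2:4420 listener before spdk_nvme_perf is pointed at it. Outside the harness the same sequence can be driven with plain rpc.py invocations; the sketch below assumes the SPDK repo root as working directory and an illustrative /tmp/aiofile path, with flag values copied from the trace:

    # Provision an NVMe-oF TCP target backed by an AIO bdev (sketch of the rpc_cmd sequence above)
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000             # 10 MB backing file
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048          # bdev AIO0, 2048-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256  # TCP transport, 8 KiB IO units, queue depth 256
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation that follows pins the initiator to cores 2 and 3 (-c 0xC) and drives a 30% read random mix (-w randrw -M 30) of 4 KiB IOs (-o 4096) at queue depth 256 for 10 seconds, which is what should pull both target reactors out of interrupt mode in the busy checks below.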
00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618734 0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618734 0 busy 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:30.141 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:30.142 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618734 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.23 reactor_0' 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618734 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.23 reactor_0 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:30.400 12:46:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:41:31.336 12:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:41:31.336 12:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:31.336 12:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:31.336 12:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618734 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.44 reactor_0' 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618734 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.44 reactor_0 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:31.595 12:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618734 1 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618734 1 busy 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618738 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.30 reactor_1' 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618738 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.30 reactor_1 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:31.596 12:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 618985 00:41:41.572 Initializing NVMe Controllers 00:41:41.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:41.572 Controller IO queue size 256, less than required. 00:41:41.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:41.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:41.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:41.572 Initialization complete. Launching workers. 
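Every reactor_is_busy/reactor_is_idle assertion in this test is the same probe, traced repeatedly from interrupt/common.sh: take one batch iteration of top for the target pid, isolate the reactor thread's %CPU field, truncate it to an integer, and compare it against the threshold, retrying up to ten times one second apart. A condensed sketch of that logic, with variable names as they appear in the trace (the reactor_cpu wrapper itself is illustrative):

    # One CPU sample for reactor <idx> of process <pid> (sketch of the interrupt/common.sh logic)
    reactor_cpu() {
        local pid=$1 idx=$2 top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')  # %CPU column
        echo "${cpu_rate%.*}"   # truncate: 99.9 -> 99, 0.0 -> 0
    }
    # busy passes once a sample reaches BUSY_THRESHOLD (30 while perf runs);
    # idle passes when the sample stays at or below the idle threshold (30).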
00:41:41.572 ========================================================
00:41:41.572 Latency(us)
00:41:41.572 Device Information : IOPS MiB/s Average min max
00:41:41.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16765.30 65.49 15275.61 5214.27 28938.14
00:41:41.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16973.60 66.30 15085.93 7102.22 26852.84
00:41:41.572 ========================================================
00:41:41.572 Total : 33738.90 131.79 15180.18 5214.27 28938.14
00:41:41.572
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618734 0
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 0 idle
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:41.572 12:47:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618734 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0'
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618734 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618734 1
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 1 idle
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734
00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618738 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618738 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:41.572 12:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618734 0 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 0 idle 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:43.481 12:47:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618734 root 20 0 128.2g 72960 33792 S 6.7 0.1 0:20.47 reactor_0' 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618734 root 20 0 128.2g 72960 33792 S 6.7 0.1 0:20.47 reactor_0 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618734 1 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618734 1 idle 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618734 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:43.481 12:47:11 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618734 -w 256 00:41:43.481 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618738 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618738 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:43.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:43.741 rmmod nvme_tcp 00:41:43.741 rmmod nvme_fabrics 00:41:43.741 rmmod nvme_keyring 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 618734 ']' 00:41:43.741 
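On the host side the data path is bracketed by a kernel-initiator connect and disconnect, and once the disconnect above completes, nvmftestfini unloads the initiator modules before the target process is killed. Reduced to its visible commands (hostnqn/hostid values elided here; the full ones appear in the connect line above):

    # Host-side lifecycle around the interrupt test (sketch)
    nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420    # block device appears with serial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the controller before target teardown
    modprobe -v -r nvme-tcp                                  # pulls nvme_fabrics/nvme_keyring out with it
    kill 618734                                              # stop the nvmf_tgt reactor process (pid from this run)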
12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 618734 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 618734 ']' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 618734 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:43.741 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618734 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618734' 00:41:44.000 killing process with pid 618734 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 618734 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 618734 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:44.000 12:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.536 12:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:46.536 00:41:46.536 real 0m22.862s 00:41:46.536 user 0m39.798s 00:41:46.536 sys 0m8.351s 00:41:46.536 12:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:46.536 12:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:46.536 ************************************ 00:41:46.536 END TEST nvmf_interrupt 00:41:46.536 ************************************ 00:41:46.536 00:41:46.536 real 35m27.366s 00:41:46.536 user 86m21.597s 00:41:46.536 sys 10m24.741s 00:41:46.536 12:47:13 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:46.536 12:47:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.536 ************************************ 00:41:46.536 END TEST nvmf_tcp 00:41:46.536 ************************************ 00:41:46.536 12:47:13 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:46.536 12:47:13 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:46.536 12:47:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:41:46.536 12:47:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:46.536 12:47:13 -- common/autotest_common.sh@10 -- # set +x 00:41:46.536 ************************************ 00:41:46.536 START TEST spdkcli_nvmf_tcp 00:41:46.536 ************************************ 00:41:46.536 12:47:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:46.536 * Looking for test storage... 00:41:46.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:46.536 12:47:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:46.536 12:47:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:46.536 12:47:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.536 --rc genhtml_branch_coverage=1 00:41:46.536 --rc genhtml_function_coverage=1 00:41:46.536 --rc genhtml_legend=1 00:41:46.536 --rc geninfo_all_blocks=1 00:41:46.536 --rc geninfo_unexecuted_blocks=1 00:41:46.536 00:41:46.536 ' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.536 --rc genhtml_branch_coverage=1 00:41:46.536 --rc genhtml_function_coverage=1 00:41:46.536 --rc genhtml_legend=1 00:41:46.536 --rc geninfo_all_blocks=1 00:41:46.536 --rc geninfo_unexecuted_blocks=1 00:41:46.536 00:41:46.536 ' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.536 --rc genhtml_branch_coverage=1 00:41:46.536 --rc genhtml_function_coverage=1 00:41:46.536 --rc genhtml_legend=1 00:41:46.536 --rc geninfo_all_blocks=1 00:41:46.536 --rc geninfo_unexecuted_blocks=1 00:41:46.536 00:41:46.536 ' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:46.536 --rc genhtml_branch_coverage=1 00:41:46.536 --rc genhtml_function_coverage=1 00:41:46.536 --rc genhtml_legend=1 00:41:46.536 --rc geninfo_all_blocks=1 00:41:46.536 --rc geninfo_unexecuted_blocks=1 00:41:46.536 00:41:46.536 ' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:46.536 
12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:46.536 12:47:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:46.536 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:46.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=621618 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 621618 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 621618 ']' 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:46.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:46.537 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.537 [2024-12-13 12:47:14.124963] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
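The spdkcli test repeats the startup pattern from the interrupt test above: launch nvmf_tgt on a two-core mask, then block in waitforlisten until the application answers on its RPC socket before any configuration is pushed. A minimal stand-in for that startup dance (the real waitforlisten in test/common/autotest_common.sh does more bookkeeping, such as the pid checks visible in the trace):

    # Start the target and wait for its RPC socket to come up (sketch)
    build/bin/nvmf_tgt -m 0x3 -p 0 &      # -m 0x3: reactors on cores 0 and 1; -p 0: main core 0
    nvmf_tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                          # poll until the app is up and listening
    done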
00:41:46.537 [2024-12-13 12:47:14.125005] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621618 ] 00:41:46.537 [2024-12-13 12:47:14.198266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:46.537 [2024-12-13 12:47:14.222203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.537 [2024-12-13 12:47:14.222205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.796 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:46.796 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:46.796 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:46.796 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.797 12:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:46.797 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:46.797 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:46.797 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:46.797 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:46.797 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:46.797 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:46.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:46.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:46.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:46.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:46.797 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:46.797 ' 00:41:50.085 [2024-12-13 12:47:17.079192] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:51.021 [2024-12-13 12:47:18.419586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:53.557 [2024-12-13 12:47:20.911289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:55.462 [2024-12-13 12:47:23.086057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:57.367 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:57.367 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:57.367 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:57.367 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:57.367 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:57.367 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:57.367 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:57.367 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:57.367 12:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:57.626 12:47:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:57.626 12:47:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:57.884 
12:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:57.884 12:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:57.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:57.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:57.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:57.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:57.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:57.884 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:57.884 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:57.884 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:57.884 ' 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:04.454 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:04.454 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:04.454 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:04.454 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:04.454 12:47:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:04.454 12:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:04.454 12:47:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.454 
12:47:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621618 ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621618' 00:42:04.454 killing process with pid 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 621618 ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 621618 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621618 ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621618 00:42:04.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (621618) - No such process 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 621618 is not found' 00:42:04.454 Process with pid 621618 is not found 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:04.454 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:04.455 12:47:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:04.455 00:42:04.455 real 0m17.361s 00:42:04.455 user 0m38.257s 00:42:04.455 sys 0m0.875s 00:42:04.455 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:04.455 12:47:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.455 ************************************ 00:42:04.455 END TEST spdkcli_nvmf_tcp 00:42:04.455 ************************************ 00:42:04.455 12:47:31 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:04.455 12:47:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:04.455 12:47:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:04.455 12:47:31 -- common/autotest_common.sh@10 -- # set +x 00:42:04.455 ************************************ 00:42:04.455 START TEST nvmf_identify_passthru 00:42:04.455 ************************************ 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:04.455 * Looking for test storage... 
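Worth noting about the spdkcli test that just finished: its pass/fail verdict comes from a golden-file comparison rather than per-command assertions. check_match dumps the live /nvmf tree with spdkcli.py, hands the result to the match tool alongside a checked-in .match pattern file, and removes the generated file afterwards, which is the rm -f visible in the trace. Roughly (paths shortened to repo-relative form; the redirect into the .test file happens inside check_match):

    # How check_match validated the spdkcli configuration (sketch)
    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match   # pattern-compare against the golden file
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test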
00:42:04.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:04.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.455 --rc genhtml_branch_coverage=1 00:42:04.455 --rc genhtml_function_coverage=1 00:42:04.455 --rc genhtml_legend=1 00:42:04.455 --rc geninfo_all_blocks=1 00:42:04.455 --rc geninfo_unexecuted_blocks=1 00:42:04.455 00:42:04.455 ' 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:04.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.455 --rc genhtml_branch_coverage=1 00:42:04.455 --rc genhtml_function_coverage=1 00:42:04.455 --rc genhtml_legend=1 00:42:04.455 --rc geninfo_all_blocks=1 00:42:04.455 --rc geninfo_unexecuted_blocks=1 00:42:04.455 00:42:04.455 ' 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:04.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.455 --rc genhtml_branch_coverage=1 00:42:04.455 --rc genhtml_function_coverage=1 00:42:04.455 --rc genhtml_legend=1 00:42:04.455 --rc geninfo_all_blocks=1 00:42:04.455 --rc geninfo_unexecuted_blocks=1 00:42:04.455 00:42:04.455 ' 00:42:04.455 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:04.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.455 --rc genhtml_branch_coverage=1 00:42:04.455 --rc genhtml_function_coverage=1 00:42:04.455 --rc genhtml_legend=1 00:42:04.455 --rc geninfo_all_blocks=1 00:42:04.455 --rc geninfo_unexecuted_blocks=1 00:42:04.455 00:42:04.455 ' 00:42:04.455 12:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:04.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:04.455 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:04.455 12:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.455 12:47:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.455 12:47:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.456 12:47:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.456 12:47:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.456 12:47:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:04.456 12:47:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.456 12:47:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.456 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:04.456 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:04.456 12:47:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:04.456 12:47:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:09.732 12:47:36 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:09.732 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:09.733 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:09.733 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.733 12:47:36 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:09.733 Found net devices under 0000:af:00.0: cvl_0_0 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:09.733 Found net devices under 0000:af:00.1: cvl_0_1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:09.733 12:47:37 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:09.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:09.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:42:09.733 00:42:09.733 --- 10.0.0.2 ping statistics --- 00:42:09.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.733 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:09.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:09.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:42:09.733 00:42:09.733 --- 10.0.0.1 ping statistics --- 00:42:09.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.733 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:09.733 12:47:37 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:09.733 12:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:09.733 12:47:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:13.926 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:13.926 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:13.926 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:13.926 12:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=628717 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:18.118 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 628717 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 628717 ']' 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:18.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:18.118 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.118 [2024-12-13 12:47:45.783314] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:18.118 [2024-12-13 12:47:45.783363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:18.377 [2024-12-13 12:47:45.859975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:18.377 [2024-12-13 12:47:45.884416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:18.377 [2024-12-13 12:47:45.884455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:42:18.377 [2024-12-13 12:47:45.884465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:18.377 [2024-12-13 12:47:45.884471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:18.377 [2024-12-13 12:47:45.884493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:18.377 [2024-12-13 12:47:45.885954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.377 [2024-12-13 12:47:45.886062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:18.377 [2024-12-13 12:47:45.886191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.377 [2024-12-13 12:47:45.886191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:18.377 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:18.377 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:18.377 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:18.377 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.377 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.377 INFO: Log level set to 20 00:42:18.377 INFO: Requests: 00:42:18.377 { 00:42:18.377 "jsonrpc": "2.0", 00:42:18.377 "method": "nvmf_set_config", 00:42:18.377 "id": 1, 00:42:18.377 "params": { 00:42:18.377 "admin_cmd_passthru": { 00:42:18.377 "identify_ctrlr": true 00:42:18.377 } 00:42:18.377 } 00:42:18.377 } 00:42:18.377 00:42:18.377 INFO: response: 00:42:18.377 { 00:42:18.377 "jsonrpc": "2.0", 00:42:18.377 "id": 1, 00:42:18.377 "result": true 00:42:18.377 } 00:42:18.377 00:42:18.378 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.378 12:47:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:18.378 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.378 12:47:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.378 INFO: Setting log level to 20 00:42:18.378 INFO: Setting log level to 20 00:42:18.378 INFO: Log level set to 20 00:42:18.378 INFO: Log level set to 20 00:42:18.378 INFO: Requests: 00:42:18.378 { 00:42:18.378 "jsonrpc": "2.0", 00:42:18.378 "method": "framework_start_init", 00:42:18.378 "id": 1 00:42:18.378 } 00:42:18.378 00:42:18.378 INFO: Requests: 00:42:18.378 { 00:42:18.378 "jsonrpc": "2.0", 00:42:18.378 "method": "framework_start_init", 00:42:18.378 "id": 1 00:42:18.378 } 00:42:18.378 00:42:18.378 [2024-12-13 12:47:46.016832] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:18.378 INFO: response: 00:42:18.378 { 00:42:18.378 "jsonrpc": "2.0", 00:42:18.378 "id": 1, 00:42:18.378 "result": true 00:42:18.378 } 00:42:18.378 00:42:18.378 INFO: response: 00:42:18.378 { 00:42:18.378 "jsonrpc": "2.0", 00:42:18.378 "id": 1, 00:42:18.378 "result": true 00:42:18.378 } 00:42:18.378 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.378 12:47:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.378 12:47:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:18.378 INFO: Setting log level to 40 00:42:18.378 INFO: Setting log level to 40 00:42:18.378 INFO: Setting log level to 40 00:42:18.378 [2024-12-13 12:47:46.030114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.378 12:47:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:18.378 12:47:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.378 12:47:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.667 Nvme0n1 00:42:21.667 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.667 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.668 [2024-12-13 12:47:48.933140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.668 [ 00:42:21.668 { 00:42:21.668 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:21.668 "subtype": "Discovery", 00:42:21.668 "listen_addresses": [], 00:42:21.668 "allow_any_host": true, 00:42:21.668 "hosts": [] 00:42:21.668 }, 00:42:21.668 { 00:42:21.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:21.668 "subtype": "NVMe", 00:42:21.668 "listen_addresses": [ 00:42:21.668 { 00:42:21.668 "trtype": "TCP", 00:42:21.668 "adrfam": "IPv4", 00:42:21.668 "traddr": "10.0.0.2", 00:42:21.668 "trsvcid": "4420" 00:42:21.668 } 00:42:21.668 ], 00:42:21.668 "allow_any_host": true, 00:42:21.668 "hosts": [], 00:42:21.668 "serial_number": 
"SPDK00000000000001", 00:42:21.668 "model_number": "SPDK bdev Controller", 00:42:21.668 "max_namespaces": 1, 00:42:21.668 "min_cntlid": 1, 00:42:21.668 "max_cntlid": 65519, 00:42:21.668 "namespaces": [ 00:42:21.668 { 00:42:21.668 "nsid": 1, 00:42:21.668 "bdev_name": "Nvme0n1", 00:42:21.668 "name": "Nvme0n1", 00:42:21.668 "nguid": "4FF70CED2601414F91451FCE26D434E8", 00:42:21.668 "uuid": "4ff70ced-2601-414f-9145-1fce26d434e8" 00:42:21.668 } 00:42:21.668 ] 00:42:21.668 } 00:42:21.668 ] 00:42:21.668 12:47:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:21.668 12:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:21.668 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:21.668 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:21.668 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:21.668 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:21.927 12:47:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:21.927 rmmod nvme_tcp 00:42:21.927 rmmod nvme_fabrics 00:42:21.927 rmmod nvme_keyring 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 628717 ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 628717 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 628717 ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 628717 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 628717 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 628717' 00:42:21.927 killing process with pid 628717 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 628717 00:42:21.927 12:47:49 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 628717 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:23.833 12:47:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.833 12:47:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:23.833 12:47:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.739 12:47:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:25.739 00:42:25.739 real 0m21.893s 00:42:25.739 user 0m28.317s 00:42:25.739 sys 0m5.339s 00:42:25.739 12:47:53 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.739 12:47:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.739 ************************************ 00:42:25.740 END TEST nvmf_identify_passthru 00:42:25.740 ************************************ 00:42:25.740 12:47:53 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:25.740 12:47:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:25.740 12:47:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:25.740 12:47:53 -- common/autotest_common.sh@10 -- # set +x 00:42:25.740 ************************************ 00:42:25.740 START TEST nvmf_dif 00:42:25.740 ************************************ 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:25.740 * Looking for test storage... 
00:42:25.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.740 12:47:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.740 --rc genhtml_branch_coverage=1 00:42:25.740 --rc genhtml_function_coverage=1 00:42:25.740 --rc genhtml_legend=1 00:42:25.740 --rc geninfo_all_blocks=1 00:42:25.740 --rc geninfo_unexecuted_blocks=1 00:42:25.740 00:42:25.740 ' 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.740 --rc genhtml_branch_coverage=1 00:42:25.740 --rc genhtml_function_coverage=1 00:42:25.740 --rc genhtml_legend=1 00:42:25.740 --rc geninfo_all_blocks=1 00:42:25.740 --rc geninfo_unexecuted_blocks=1 00:42:25.740 00:42:25.740 ' 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:42:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.740 --rc genhtml_branch_coverage=1 00:42:25.740 --rc genhtml_function_coverage=1 00:42:25.740 --rc genhtml_legend=1 00:42:25.740 --rc geninfo_all_blocks=1 00:42:25.740 --rc geninfo_unexecuted_blocks=1 00:42:25.740 00:42:25.740 ' 00:42:25.740 12:47:53 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.740 --rc genhtml_branch_coverage=1 00:42:25.740 --rc genhtml_function_coverage=1 00:42:25.740 --rc genhtml_legend=1 00:42:25.740 --rc geninfo_all_blocks=1 00:42:25.740 --rc geninfo_unexecuted_blocks=1 00:42:25.740 00:42:25.740 ' 00:42:25.740 12:47:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.740 12:47:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.999 12:47:53 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:25.999 12:47:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.999 12:47:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.999 12:47:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.999 12:47:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.999 12:47:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.000 12:47:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.000 12:47:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.000 12:47:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:26.000 12:47:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:26.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:26.000 12:47:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:26.000 12:47:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:26.000 12:47:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:26.000 12:47:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:26.000 12:47:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:26.000 12:47:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:26.000 12:47:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:26.000 12:47:53 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:26.000 12:47:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:32.568 12:47:58 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:32.568 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:32.568 
12:47:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:32.568 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:32.568 Found net devices under 0000:af:00.0: cvl_0_0 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:32.568 Found net devices under 0000:af:00.1: cvl_0_1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:32.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:32.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:42:32.568 00:42:32.568 --- 10.0.0.2 ping statistics --- 00:42:32.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:32.568 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:42:32.568 12:47:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:32.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:32.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:42:32.568 00:42:32.568 --- 10.0.0.1 ping statistics --- 00:42:32.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:32.569 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:42:32.569 12:47:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:32.569 12:47:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:32.569 12:47:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:32.569 12:47:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:34.472 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:34.472 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:34.472 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:34.472 12:48:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:34.472 12:48:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:34.472 12:48:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:34.472 12:48:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.472 12:48:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=634209 00:42:34.472 12:48:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:34.472 12:48:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 634209 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 634209 ']' 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:34.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.472 12:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.472 [2024-12-13 12:48:02.091146] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:34.472 [2024-12-13 12:48:02.091191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:34.472 [2024-12-13 12:48:02.165862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.732 [2024-12-13 12:48:02.187419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:34.733 [2024-12-13 12:48:02.187453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:34.733 [2024-12-13 12:48:02.187459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:34.733 [2024-12-13 12:48:02.187469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:34.733 [2024-12-13 12:48:02.187474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:34.733 [2024-12-13 12:48:02.187982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:34.733 12:48:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 12:48:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:34.733 12:48:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:34.733 12:48:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 [2024-12-13 12:48:02.318661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.733 12:48:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 ************************************ 00:42:34.733 START TEST fio_dif_1_default 00:42:34.733 ************************************ 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 bdev_null0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.733 [2024-12-13 12:48:02.382955] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:34.733 
12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:34.733 { 00:42:34.733 "params": { 00:42:34.733 "name": "Nvme$subsystem", 00:42:34.733 "trtype": "$TEST_TRANSPORT", 00:42:34.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:34.733 "adrfam": "ipv4", 00:42:34.733 "trsvcid": "$NVMF_PORT", 00:42:34.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:34.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:34.733 "hdgst": ${hdgst:-false}, 00:42:34.733 "ddgst": ${ddgst:-false} 00:42:34.733 }, 00:42:34.733 "method": "bdev_nvme_attach_controller" 00:42:34.733 } 00:42:34.733 EOF 00:42:34.733 )") 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
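For readers decoding this stretch of the trace: the harness is handing fio two anonymous file descriptors, one carrying the bdev JSON config (printed just below) and one carrying the generated fio job file. A minimal standalone sketch of the same invocation, assuming the harness's gen_nvmf_target_json and gen_fio_conf helper functions are sourced; the plugin and fio paths are the ones shown in this trace:

# spdk_bdev is fio's SPDK bdev ioengine, loaded via LD_PRELOAD; the two
# process substitutions are what produce the /dev/fd/62 and /dev/fd/61
# arguments seen in the traced command line.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_nvmf_target_json 0) \
  <(gen_fio_conf)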
00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:34.733 "params": { 00:42:34.733 "name": "Nvme0", 00:42:34.733 "trtype": "tcp", 00:42:34.733 "traddr": "10.0.0.2", 00:42:34.733 "adrfam": "ipv4", 00:42:34.733 "trsvcid": "4420", 00:42:34.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.733 "hdgst": false, 00:42:34.733 "ddgst": false 00:42:34.733 }, 00:42:34.733 "method": "bdev_nvme_attach_controller" 00:42:34.733 }' 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:34.733 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:35.022 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:35.022 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:35.022 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:35.022 12:48:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.286 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:35.286 fio-3.35 00:42:35.286 Starting 1 thread 00:42:47.599 00:42:47.599 filename0: (groupid=0, jobs=1): err= 0: pid=634579: Fri Dec 13 12:48:13 2024 00:42:47.599 read: IOPS=195, BW=780KiB/s (799kB/s)(7808KiB/10006msec) 00:42:47.599 slat (nsec): min=5951, max=26909, avg=6479.29, stdev=871.92 00:42:47.599 clat (usec): min=415, max=43460, avg=20485.12, stdev=20461.12 00:42:47.599 lat (usec): min=421, max=43487, avg=20491.59, stdev=20461.09 00:42:47.599 clat percentiles (usec): 00:42:47.599 | 1.00th=[ 469], 5.00th=[ 502], 10.00th=[ 570], 20.00th=[ 611], 00:42:47.599 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 717], 60.00th=[41157], 00:42:47.599 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:42:47.599 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:42:47.599 | 99.99th=[43254] 00:42:47.599 bw ( KiB/s): min= 670, max= 896, per=99.83%, avg=779.10, stdev=56.38, samples=20 00:42:47.599 iops : min= 167, max= 224, avg=194.75, stdev=14.15, samples=20 00:42:47.599 lat (usec) : 500=4.71%, 750=46.47%, 1000=0.05% 00:42:47.599 lat (msec) : 2=0.20%, 50=48.57% 00:42:47.599 cpu : usr=93.16%, sys=6.54%, ctx=14, majf=0, minf=0 00:42:47.599 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.599 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:47.599 latency : target=0, window=0, percentile=100.00%, depth=4 
00:42:47.599 00:42:47.599 Run status group 0 (all jobs): 00:42:47.599 READ: bw=780KiB/s (799kB/s), 780KiB/s-780KiB/s (799kB/s-799kB/s), io=7808KiB (7995kB), run=10006-10006msec 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 00:42:47.599 real 0m11.282s 00:42:47.599 user 0m16.004s 00:42:47.599 sys 0m0.999s 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 ************************************ 00:42:47.599 END TEST fio_dif_1_default 00:42:47.599 ************************************ 00:42:47.599 12:48:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:47.599 12:48:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:47.599 12:48:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 ************************************ 00:42:47.599 START TEST fio_dif_1_multi_subsystems 00:42:47.599 ************************************ 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 bdev_null0 00:42:47.599 12:48:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.599 [2024-12-13 12:48:13.743281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:47.599 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.600 bdev_null1 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:47.600 { 00:42:47.600 "params": { 00:42:47.600 "name": "Nvme$subsystem", 00:42:47.600 "trtype": "$TEST_TRANSPORT", 00:42:47.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:47.600 "adrfam": "ipv4", 00:42:47.600 "trsvcid": "$NVMF_PORT", 00:42:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:47.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:47.600 "hdgst": ${hdgst:-false}, 00:42:47.600 "ddgst": ${ddgst:-false} 00:42:47.600 }, 00:42:47.600 "method": "bdev_nvme_attach_controller" 00:42:47.600 } 00:42:47.600 EOF 00:42:47.600 )") 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:47.600 { 00:42:47.600 "params": { 00:42:47.600 "name": "Nvme$subsystem", 00:42:47.600 "trtype": "$TEST_TRANSPORT", 00:42:47.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:47.600 "adrfam": "ipv4", 00:42:47.600 "trsvcid": "$NVMF_PORT", 00:42:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:47.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:47.600 "hdgst": ${hdgst:-false}, 00:42:47.600 "ddgst": ${ddgst:-false} 00:42:47.600 }, 00:42:47.600 "method": "bdev_nvme_attach_controller" 00:42:47.600 } 00:42:47.600 EOF 00:42:47.600 )") 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:47.600 "params": { 00:42:47.600 "name": "Nvme0", 00:42:47.600 "trtype": "tcp", 00:42:47.600 "traddr": "10.0.0.2", 00:42:47.600 "adrfam": "ipv4", 00:42:47.600 "trsvcid": "4420", 00:42:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.600 "hdgst": false, 00:42:47.600 "ddgst": false 00:42:47.600 }, 00:42:47.600 "method": "bdev_nvme_attach_controller" 00:42:47.600 },{ 00:42:47.600 "params": { 00:42:47.600 "name": "Nvme1", 00:42:47.600 "trtype": "tcp", 00:42:47.600 "traddr": "10.0.0.2", 00:42:47.600 "adrfam": "ipv4", 00:42:47.600 "trsvcid": "4420", 00:42:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:47.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:47.600 "hdgst": false, 00:42:47.600 "ddgst": false 00:42:47.600 }, 00:42:47.600 "method": "bdev_nvme_attach_controller" 00:42:47.600 }' 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:47.600 12:48:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.600 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:47.600 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:47.600 fio-3.35 00:42:47.600 Starting 2 threads 00:42:57.561 00:42:57.561 filename0: (groupid=0, jobs=1): err= 0: pid=636894: Fri Dec 13 12:48:24 2024 00:42:57.561 read: IOPS=210, BW=843KiB/s (863kB/s)(8432KiB/10007msec) 00:42:57.561 slat (nsec): min=5947, max=38897, avg=8167.52, stdev=4978.43 00:42:57.561 clat (usec): min=385, max=42575, avg=18963.29, stdev=20427.21 00:42:57.561 lat (usec): min=393, max=42582, avg=18971.46, stdev=20425.95 00:42:57.561 clat percentiles (usec): 00:42:57.561 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 437], 00:42:57.561 | 30.00th=[ 490], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[41157], 00:42:57.561 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:57.561 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:57.561 | 99.99th=[42730] 00:42:57.561 bw ( KiB/s): min= 704, max= 1024, per=66.98%, avg=825.26, stdev=97.52, samples=19 00:42:57.561 iops : min= 176, max= 256, avg=206.32, stdev=24.38, samples=19 00:42:57.561 lat (usec) : 500=33.35%, 750=21.68% 00:42:57.561 lat (msec) : 50=44.97% 00:42:57.561 cpu : usr=97.71%, sys=2.02%, ctx=14, majf=0, minf=74 00:42:57.561 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.561 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.561 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:57.561 filename1: (groupid=0, jobs=1): err= 0: pid=636895: Fri Dec 13 12:48:24 2024 00:42:57.561 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10029msec) 00:42:57.562 slat (nsec): min=6023, max=44695, avg=9487.36, stdev=5654.46 00:42:57.562 clat (usec): min=439, max=42117, avg=40902.67, stdev=2607.86 00:42:57.562 lat (usec): min=446, max=42139, avg=40912.16, stdev=2607.83 00:42:57.562 clat percentiles (usec): 00:42:57.562 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:57.562 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.562 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:42:57.562 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:57.562 | 99.99th=[42206] 00:42:57.562 bw ( KiB/s): min= 384, max= 416, per=31.67%, avg=390.40, stdev=13.13, samples=20 00:42:57.562 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:42:57.562 lat (usec) : 500=0.41% 00:42:57.562 lat (msec) : 50=99.59% 00:42:57.562 cpu : usr=97.50%, sys=2.23%, ctx=14, majf=0, minf=132 00:42:57.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:42:57.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.562 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:57.562 00:42:57.562 Run status group 0 (all jobs): 00:42:57.562 READ: bw=1232KiB/s (1261kB/s), 391KiB/s-843KiB/s (400kB/s-863kB/s), io=12.1MiB (12.6MB), run=10007-10029msec 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.562 00:42:57.562 real 0m11.533s 00:42:57.562 user 0m26.463s 00:42:57.562 sys 0m0.798s 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.562 12:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.562 ************************************ 00:42:57.562 END TEST fio_dif_1_multi_subsystems 00:42:57.562 ************************************ 00:42:57.819 12:48:25 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:42:57.819 12:48:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:57.819 12:48:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.819 12:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:57.819 ************************************ 00:42:57.819 START TEST fio_dif_rand_params 00:42:57.819 ************************************ 00:42:57.819 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:57.819 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.820 bdev_null0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:57.820 [2024-12-13 12:48:25.350529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:57.820 
12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:57.820 { 00:42:57.820 "params": { 00:42:57.820 "name": "Nvme$subsystem", 00:42:57.820 "trtype": "$TEST_TRANSPORT", 00:42:57.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:57.820 "adrfam": "ipv4", 00:42:57.820 "trsvcid": "$NVMF_PORT", 00:42:57.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:57.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:57.820 "hdgst": ${hdgst:-false}, 00:42:57.820 "ddgst": ${ddgst:-false} 00:42:57.820 }, 00:42:57.820 "method": "bdev_nvme_attach_controller" 00:42:57.820 } 00:42:57.820 EOF 00:42:57.820 )") 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
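Context for the rpc_cmd entries traced a few steps above: rpc_cmd forwards each call to SPDK's scripts/rpc.py over the application's UNIX socket (here /var/tmp/spdk.sock, per the earlier waitforlisten entry). A minimal sketch of the same subsystem setup as direct calls, using only the method names and flags visible in this trace; the rpc.py path is the workspace path from the log, and the tcp transport is assumed to exist already (the trace created it earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# null bdev: 64 MiB, 512-byte blocks, 16-byte metadata carrying DIF type-3
# protection information
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# expose it over NVMe/TCP on the target-namespace IP used throughout the run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420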
00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:57.820 "params": { 00:42:57.820 "name": "Nvme0", 00:42:57.820 "trtype": "tcp", 00:42:57.820 "traddr": "10.0.0.2", 00:42:57.820 "adrfam": "ipv4", 00:42:57.820 "trsvcid": "4420", 00:42:57.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:57.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:57.820 "hdgst": false, 00:42:57.820 "ddgst": false 00:42:57.820 }, 00:42:57.820 "method": "bdev_nvme_attach_controller" 00:42:57.820 }' 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:57.820 12:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:58.078 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:58.078 ... 
00:42:58.078 fio-3.35 00:42:58.078 Starting 3 threads 00:43:04.640 00:43:04.640 filename0: (groupid=0, jobs=1): err= 0: pid=638807: Fri Dec 13 12:48:31 2024 00:43:04.640 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5045msec) 00:43:04.640 slat (nsec): min=6347, max=27812, avg=10668.02, stdev=2391.20 00:43:04.640 clat (usec): min=3303, max=51313, avg=9461.26, stdev=6607.17 00:43:04.640 lat (usec): min=3310, max=51330, avg=9471.93, stdev=6607.13 00:43:04.640 clat percentiles (usec): 00:43:04.640 | 1.00th=[ 3687], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7570], 00:43:04.640 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:43:04.640 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:43:04.640 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:43:04.640 | 99.99th=[51119] 00:43:04.640 bw ( KiB/s): min=30464, max=46336, per=34.00%, avg=40704.00, stdev=5334.53, samples=10 00:43:04.640 iops : min= 238, max= 362, avg=318.00, stdev=41.68, samples=10 00:43:04.640 lat (msec) : 4=2.51%, 10=86.94%, 20=7.97%, 50=2.01%, 100=0.56% 00:43:04.640 cpu : usr=95.60%, sys=4.06%, ctx=13, majf=0, minf=2 00:43:04.640 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:04.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.640 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:04.640 filename0: (groupid=0, jobs=1): err= 0: pid=638808: Fri Dec 13 12:48:31 2024 00:43:04.640 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(188MiB/5002msec) 00:43:04.640 slat (nsec): min=6309, max=57785, avg=12889.41, stdev=4272.67 00:43:04.640 clat (usec): min=3319, max=52306, avg=9948.22, stdev=5407.44 00:43:04.640 lat (usec): min=3325, max=52318, avg=9961.11, stdev=5407.68 00:43:04.640 clat percentiles (usec): 00:43:04.640 | 1.00th=[ 3720], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 7373], 00:43:04.640 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10028], 00:43:04.640 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11994], 95.00th=[12780], 00:43:04.640 | 99.00th=[48497], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:43:04.640 | 99.99th=[52167] 00:43:04.640 bw ( KiB/s): min=32512, max=45824, per=31.56%, avg=37774.22, stdev=5057.40, samples=9 00:43:04.640 iops : min= 254, max= 358, avg=295.11, stdev=39.51, samples=9 00:43:04.640 lat (msec) : 4=1.59%, 10=58.70%, 20=38.11%, 50=0.93%, 100=0.66% 00:43:04.641 cpu : usr=87.52%, sys=7.94%, ctx=832, majf=0, minf=9 00:43:04.641 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.641 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:04.641 filename0: (groupid=0, jobs=1): err= 0: pid=638809: Fri Dec 13 12:48:31 2024 00:43:04.641 read: IOPS=323, BW=40.5MiB/s (42.4MB/s)(202MiB/5003msec) 00:43:04.641 slat (nsec): min=6281, max=38461, avg=11012.82, stdev=2728.61 00:43:04.641 clat (usec): min=3318, max=52361, avg=9255.36, stdev=5864.56 00:43:04.641 lat (usec): min=3325, max=52374, avg=9266.37, stdev=5864.76 00:43:04.641 clat percentiles (usec): 00:43:04.641 | 1.00th=[ 3556], 5.00th=[ 5342], 10.00th=[ 5997], 
20.00th=[ 7046], 00:43:04.641 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:43:04.641 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:43:04.641 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:43:04.641 | 99.99th=[52167] 00:43:04.641 bw ( KiB/s): min=32768, max=49920, per=34.53%, avg=41329.78, stdev=6865.75, samples=9 00:43:04.641 iops : min= 256, max= 390, avg=322.89, stdev=53.64, samples=9 00:43:04.641 lat (msec) : 4=2.59%, 10=82.40%, 20=13.16%, 50=0.68%, 100=1.17% 00:43:04.641 cpu : usr=96.24%, sys=3.42%, ctx=7, majf=0, minf=0 00:43:04.641 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:04.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.641 issued rwts: total=1619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:04.641 00:43:04.641 Run status group 0 (all jobs): 00:43:04.641 READ: bw=117MiB/s (123MB/s), 37.6MiB/s-40.5MiB/s (39.5MB/s-42.4MB/s), io=590MiB (618MB), run=5002-5045msec 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 bdev_null0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 [2024-12-13 12:48:31.751219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 bdev_null1 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 bdev_null2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:04.641 { 00:43:04.641 "params": { 00:43:04.641 "name": "Nvme$subsystem", 00:43:04.641 "trtype": "$TEST_TRANSPORT", 00:43:04.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:04.641 "adrfam": "ipv4", 00:43:04.641 "trsvcid": "$NVMF_PORT", 00:43:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:04.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:04.641 "hdgst": ${hdgst:-false}, 00:43:04.641 "ddgst": ${ddgst:-false} 00:43:04.641 }, 00:43:04.641 "method": "bdev_nvme_attach_controller" 00:43:04.641 } 00:43:04.641 EOF 00:43:04.641 )") 00:43:04.641 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:04.642 { 00:43:04.642 "params": { 00:43:04.642 "name": "Nvme$subsystem", 00:43:04.642 "trtype": "$TEST_TRANSPORT", 00:43:04.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:04.642 "adrfam": "ipv4", 00:43:04.642 "trsvcid": "$NVMF_PORT", 00:43:04.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:04.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:04.642 "hdgst": ${hdgst:-false}, 00:43:04.642 "ddgst": ${ddgst:-false} 00:43:04.642 }, 00:43:04.642 "method": "bdev_nvme_attach_controller" 00:43:04.642 } 00:43:04.642 EOF 00:43:04.642 )") 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:04.642 { 00:43:04.642 "params": { 00:43:04.642 "name": "Nvme$subsystem", 00:43:04.642 "trtype": "$TEST_TRANSPORT", 00:43:04.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:04.642 "adrfam": "ipv4", 00:43:04.642 "trsvcid": "$NVMF_PORT", 00:43:04.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:04.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:04.642 "hdgst": ${hdgst:-false}, 00:43:04.642 "ddgst": ${ddgst:-false} 00:43:04.642 }, 00:43:04.642 "method": "bdev_nvme_attach_controller" 00:43:04.642 } 00:43:04.642 EOF 00:43:04.642 )") 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:04.642 "params": { 00:43:04.642 "name": "Nvme0", 00:43:04.642 "trtype": "tcp", 00:43:04.642 "traddr": "10.0.0.2", 00:43:04.642 "adrfam": "ipv4", 00:43:04.642 "trsvcid": "4420", 00:43:04.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:04.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:04.642 "hdgst": false, 00:43:04.642 "ddgst": false 00:43:04.642 }, 00:43:04.642 "method": "bdev_nvme_attach_controller" 00:43:04.642 },{ 00:43:04.642 "params": { 00:43:04.642 "name": "Nvme1", 00:43:04.642 "trtype": "tcp", 00:43:04.642 "traddr": "10.0.0.2", 00:43:04.642 "adrfam": "ipv4", 00:43:04.642 "trsvcid": "4420", 00:43:04.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:04.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:04.642 "hdgst": false, 00:43:04.642 "ddgst": false 00:43:04.642 }, 00:43:04.642 "method": "bdev_nvme_attach_controller" 00:43:04.642 },{ 00:43:04.642 "params": { 00:43:04.642 "name": "Nvme2", 00:43:04.642 "trtype": "tcp", 00:43:04.642 "traddr": "10.0.0.2", 00:43:04.642 "adrfam": "ipv4", 00:43:04.642 "trsvcid": "4420", 00:43:04.642 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:04.642 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:04.642 "hdgst": false, 00:43:04.642 "ddgst": false 00:43:04.642 }, 00:43:04.642 "method": "bdev_nvme_attach_controller" 00:43:04.642 }' 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:04.642 12:48:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.642 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:04.642 ... 00:43:04.642 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:04.642 ... 00:43:04.642 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:04.642 ... 00:43:04.642 fio-3.35 00:43:04.642 Starting 24 threads 00:43:16.838 00:43:16.838 filename0: (groupid=0, jobs=1): err= 0: pid=639864: Fri Dec 13 12:48:43 2024 00:43:16.838 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10009msec) 00:43:16.838 slat (usec): min=7, max=101, avg=19.00, stdev=11.25 00:43:16.838 clat (usec): min=21698, max=33404, avg=30275.35, stdev=638.83 00:43:16.838 lat (usec): min=21718, max=33506, avg=30294.35, stdev=636.48 00:43:16.838 clat percentiles (usec): 00:43:16.838 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:16.838 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.838 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:43:16.838 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32900], 99.95th=[33162], 00:43:16.838 | 99.99th=[33424] 00:43:16.838 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19 00:43:16.838 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:43:16.838 lat (msec) : 50=100.00% 00:43:16.838 cpu : usr=98.43%, sys=1.18%, ctx=14, majf=0, minf=9 00:43:16.838 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.838 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.838 filename0: (groupid=0, jobs=1): err= 0: pid=639865: Fri Dec 13 12:48:43 2024 00:43:16.838 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec) 00:43:16.838 slat (usec): min=7, max=107, avg=36.91, stdev=19.74 00:43:16.838 clat (usec): min=10408, max=30928, avg=29999.43, stdev=1395.38 00:43:16.838 lat (usec): min=10424, max=30951, avg=30036.34, stdev=1396.16 00:43:16.838 clat percentiles (usec): 00:43:16.838 | 1.00th=[27919], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:43:16.838 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:16.838 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.838 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:43:16.838 | 99.99th=[30802] 00:43:16.838 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2105.80, stdev=65.57, samples=20 00:43:16.838 iops : min= 512, max= 545, avg=526.45, stdev=16.39, samples=20 00:43:16.838 lat (msec) : 20=0.61%, 50=99.39% 00:43:16.838 cpu : usr=98.55%, sys=1.01%, ctx=38, majf=0, minf=9 00:43:16.838 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639866: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:43:16.839 slat (nsec): min=6892, max=49093, avg=23974.99, stdev=7999.40 00:43:16.839 clat (usec): min=15770, max=43511, avg=30204.53, stdev=1177.30 00:43:16.839 lat (usec): min=15786, max=43528, avg=30228.50, stdev=1177.24 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.839 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[43254], 99.95th=[43254], 00:43:16.839 | 99.99th=[43254] 00:43:16.839 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2095.32, stdev=76.07, samples=19 00:43:16.839 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.839 lat (msec) : 20=0.30%, 50=99.70% 00:43:16.839 cpu : usr=98.28%, sys=1.33%, ctx=13, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639867: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:43:16.839 slat (nsec): min=5408, max=45105, avg=20186.02, stdev=6319.17 00:43:16.839 clat (usec): min=17940, max=54651, avg=30302.97, stdev=1510.25 00:43:16.839 lat (usec): min=17970, max=54672, avg=30323.15, stdev=1509.85 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[54789], 99.95th=[54789], 00:43:16.839 | 99.99th=[54789] 00:43:16.839 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.839 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.839 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:43:16.839 cpu : usr=98.50%, sys=1.11%, ctx=16, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639868: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:43:16.839 slat (nsec): min=5732, max=41770, avg=20821.73, 
stdev=6031.42 00:43:16.839 clat (usec): min=17851, max=54771, avg=30306.95, stdev=1519.63 00:43:16.839 lat (usec): min=17867, max=54785, avg=30327.77, stdev=1519.03 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[54789], 99.95th=[54789], 00:43:16.839 | 99.99th=[54789] 00:43:16.839 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.839 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.839 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:43:16.839 cpu : usr=98.69%, sys=0.92%, ctx=14, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639869: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=532, BW=2131KiB/s (2182kB/s)(20.8MiB/10002msec) 00:43:16.839 slat (nsec): min=7567, max=44031, avg=13152.38, stdev=4163.59 00:43:16.839 clat (usec): min=2183, max=30835, avg=29924.32, stdev=3016.29 00:43:16.839 lat (usec): min=2199, max=30848, avg=29937.47, stdev=3015.96 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[ 7898], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:43:16.839 | 99.99th=[30802] 00:43:16.839 bw ( KiB/s): min= 2048, max= 2565, per=4.21%, avg=2129.11, stdev=123.24, samples=19 00:43:16.839 iops : min= 512, max= 641, avg=532.26, stdev=30.76, samples=19 00:43:16.839 lat (msec) : 4=0.30%, 10=0.90%, 20=0.60%, 50=98.20% 00:43:16.839 cpu : usr=98.53%, sys=1.10%, ctx=12, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639870: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10011msec) 00:43:16.839 slat (nsec): min=8298, max=49630, avg=22422.14, stdev=6928.31 00:43:16.839 clat (usec): min=21645, max=43633, avg=30245.34, stdev=713.16 00:43:16.839 lat (usec): min=21672, max=43654, avg=30267.77, stdev=712.52 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[35390], 99.95th=[35390], 00:43:16.839 | 99.99th=[43779] 00:43:16.839 bw 
( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19 00:43:16.839 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:43:16.839 lat (msec) : 50=100.00% 00:43:16.839 cpu : usr=98.56%, sys=1.07%, ctx=11, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename0: (groupid=0, jobs=1): err= 0: pid=639871: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10011msec) 00:43:16.839 slat (nsec): min=6889, max=53121, avg=24735.87, stdev=7787.59 00:43:16.839 clat (usec): min=15659, max=43434, avg=30215.97, stdev=1178.10 00:43:16.839 lat (usec): min=15686, max=43451, avg=30240.70, stdev=1177.74 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[43254], 99.95th=[43254], 00:43:16.839 | 99.99th=[43254] 00:43:16.839 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2095.32, stdev=76.07, samples=19 00:43:16.839 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.839 lat (msec) : 20=0.30%, 50=99.70% 00:43:16.839 cpu : usr=98.63%, sys=0.99%, ctx=15, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename1: (groupid=0, jobs=1): err= 0: pid=639872: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec) 00:43:16.839 slat (nsec): min=5319, max=91535, avg=37065.41, stdev=20814.60 00:43:16.839 clat (usec): min=11432, max=32403, avg=29991.62, stdev=1303.61 00:43:16.839 lat (usec): min=11440, max=32418, avg=30028.69, stdev=1305.82 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[27919], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:43:16.839 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[31589], 00:43:16.839 | 99.99th=[32375] 00:43:16.839 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2105.60, stdev=65.33, samples=20 00:43:16.839 iops : min= 512, max= 544, avg=526.40, stdev=16.33, samples=20 00:43:16.839 lat (msec) : 20=0.57%, 50=99.43% 00:43:16.839 cpu : usr=98.61%, sys=1.01%, ctx=13, majf=0, minf=9 00:43:16.839 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.839 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.839 
latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.839 filename1: (groupid=0, jobs=1): err= 0: pid=639873: Fri Dec 13 12:48:43 2024 00:43:16.839 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec) 00:43:16.839 slat (nsec): min=7877, max=50275, avg=24330.77, stdev=7903.81 00:43:16.839 clat (usec): min=21620, max=34899, avg=30208.89, stdev=618.17 00:43:16.839 lat (usec): min=21634, max=34923, avg=30233.22, stdev=618.01 00:43:16.839 clat percentiles (usec): 00:43:16.839 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.839 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.839 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.839 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:43:16.839 | 99.99th=[34866] 00:43:16.839 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2101.89, stdev=64.93, samples=19 00:43:16.840 iops : min= 512, max= 544, avg=525.47, stdev=16.23, samples=19 00:43:16.840 lat (msec) : 50=100.00% 00:43:16.840 cpu : usr=98.60%, sys=1.02%, ctx=12, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639874: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:43:16.840 slat (nsec): min=5335, max=52419, avg=25185.69, stdev=7833.39 00:43:16.840 clat (usec): min=15696, max=44147, avg=30208.47, stdev=1226.16 00:43:16.840 lat (usec): min=15715, max=44162, avg=30233.65, stdev=1225.78 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.840 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.840 | 99.00th=[30540], 99.50th=[30802], 99.90th=[44303], 99.95th=[44303], 00:43:16.840 | 99.99th=[44303] 00:43:16.840 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.840 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.840 lat (msec) : 20=0.30%, 50=99.70% 00:43:16.840 cpu : usr=98.43%, sys=1.20%, ctx=12, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639876: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10010msec) 00:43:16.840 slat (nsec): min=7711, max=85547, avg=18841.18, stdev=14900.36 00:43:16.840 clat (usec): min=10262, max=30957, avg=30096.79, stdev=1947.43 00:43:16.840 lat (usec): min=10272, max=30970, avg=30115.64, stdev=1946.25 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[15926], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:43:16.840 | 30.00th=[30278], 40.00th=[30278], 
50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:16.840 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[31065], 00:43:16.840 | 99.99th=[31065] 00:43:16.840 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20 00:43:16.840 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20 00:43:16.840 lat (msec) : 20=1.21%, 50=98.79% 00:43:16.840 cpu : usr=98.34%, sys=1.26%, ctx=15, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639877: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:43:16.840 slat (nsec): min=7999, max=45701, avg=19243.00, stdev=6224.29 00:43:16.840 clat (usec): min=18025, max=65795, avg=30329.85, stdev=1602.31 00:43:16.840 lat (usec): min=18044, max=65811, avg=30349.10, stdev=1601.83 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:16.840 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.840 | 99.00th=[30540], 99.50th=[30802], 99.90th=[54789], 99.95th=[54789], 00:43:16.840 | 99.99th=[65799] 00:43:16.840 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.840 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.840 lat (msec) : 20=0.34%, 50=99.35%, 100=0.30% 00:43:16.840 cpu : usr=98.56%, sys=1.05%, ctx=11, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639878: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10004msec) 00:43:16.840 slat (nsec): min=7990, max=43730, avg=14704.95, stdev=4648.15 00:43:16.840 clat (usec): min=2311, max=30876, avg=29916.87, stdev=2973.47 00:43:16.840 lat (usec): min=2337, max=30889, avg=29931.58, stdev=2973.29 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[ 9503], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:43:16.840 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:43:16.840 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:43:16.840 | 99.99th=[30802] 00:43:16.840 bw ( KiB/s): min= 2048, max= 2560, per=4.21%, avg=2128.84, stdev=122.26, samples=19 00:43:16.840 iops : min= 512, max= 640, avg=532.21, stdev=30.56, samples=19 00:43:16.840 lat (msec) : 4=0.30%, 10=0.90%, 20=0.60%, 50=98.20% 00:43:16.840 cpu : usr=98.42%, sys=1.19%, ctx=11, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639879: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10007msec) 00:43:16.840 slat (nsec): min=7189, max=90684, avg=36685.82, stdev=21292.71 00:43:16.840 clat (usec): min=10752, max=47105, avg=30060.20, stdev=1464.40 00:43:16.840 lat (usec): min=10767, max=47118, avg=30096.89, stdev=1464.88 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:43:16.840 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:43:16.840 | 99.00th=[30540], 99.50th=[30540], 99.90th=[46924], 99.95th=[46924], 00:43:16.840 | 99.99th=[46924] 00:43:16.840 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.840 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.840 lat (msec) : 20=0.27%, 50=99.73% 00:43:16.840 cpu : usr=98.63%, sys=0.98%, ctx=11, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename1: (groupid=0, jobs=1): err= 0: pid=639880: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10010msec) 00:43:16.840 slat (nsec): min=5764, max=89692, avg=35964.78, stdev=20960.41 00:43:16.840 clat (usec): min=10233, max=40116, avg=29954.81, stdev=1891.19 00:43:16.840 lat (usec): min=10248, max=40131, avg=29990.78, stdev=1892.32 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[16581], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:43:16.840 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.840 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:43:16.840 | 99.99th=[40109] 00:43:16.840 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=77.69, samples=20 00:43:16.840 iops : min= 512, max= 576, avg=528.00, stdev=19.42, samples=20 00:43:16.840 lat (msec) : 20=1.06%, 50=98.94% 00:43:16.840 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename2: (groupid=0, jobs=1): err= 0: pid=639881: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:43:16.840 slat (nsec): min=4527, max=45420, avg=23253.51, stdev=6434.85 00:43:16.840 clat 
(usec): min=15786, max=45385, avg=30216.19, stdev=1235.17 00:43:16.840 lat (usec): min=15801, max=45401, avg=30239.45, stdev=1234.99 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.840 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.840 | 99.00th=[30540], 99.50th=[30802], 99.90th=[43779], 99.95th=[43779], 00:43:16.840 | 99.99th=[45351] 00:43:16.840 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2095.32, stdev=76.07, samples=19 00:43:16.840 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.840 lat (msec) : 20=0.30%, 50=99.70% 00:43:16.840 cpu : usr=98.51%, sys=1.10%, ctx=12, majf=0, minf=9 00:43:16.840 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.840 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.840 filename2: (groupid=0, jobs=1): err= 0: pid=639882: Fri Dec 13 12:48:43 2024 00:43:16.840 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10013msec) 00:43:16.840 slat (nsec): min=5226, max=49583, avg=24812.44, stdev=7463.69 00:43:16.840 clat (usec): min=15710, max=44132, avg=30205.56, stdev=1221.62 00:43:16.840 lat (usec): min=15724, max=44147, avg=30230.37, stdev=1221.30 00:43:16.840 clat percentiles (usec): 00:43:16.840 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.840 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.840 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30802], 99.90th=[44303], 99.95th=[44303], 00:43:16.841 | 99.99th=[44303] 00:43:16.841 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.841 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.841 lat (msec) : 20=0.30%, 50=99.70% 00:43:16.841 cpu : usr=98.49%, sys=1.13%, ctx=11, majf=0, minf=9 00:43:16.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639883: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10005msec) 00:43:16.841 slat (nsec): min=5585, max=44532, avg=19920.92, stdev=6773.01 00:43:16.841 clat (usec): min=13680, max=71498, avg=30342.99, stdev=1681.77 00:43:16.841 lat (usec): min=13688, max=71514, avg=30362.91, stdev=1681.13 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:16.841 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.841 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30802], 99.90th=[54789], 99.95th=[54789], 00:43:16.841 | 99.99th=[71828] 00:43:16.841 bw ( KiB/s): min= 1920, max= 2160, per=4.15%, avg=2094.32, 
stdev=65.73, samples=19 00:43:16.841 iops : min= 480, max= 540, avg=523.58, stdev=16.43, samples=19 00:43:16.841 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:43:16.841 cpu : usr=98.44%, sys=1.17%, ctx=11, majf=0, minf=9 00:43:16.841 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639884: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:43:16.841 slat (nsec): min=7720, max=47751, avg=21239.17, stdev=6443.53 00:43:16.841 clat (usec): min=17943, max=54682, avg=30296.23, stdev=1510.57 00:43:16.841 lat (usec): min=17960, max=54697, avg=30317.47, stdev=1510.01 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[30016], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:43:16.841 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30802], 99.90th=[54789], 99.95th=[54789], 00:43:16.841 | 99.99th=[54789] 00:43:16.841 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.841 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.841 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 00:43:16.841 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=9 00:43:16.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639885: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.7MiB/10019msec) 00:43:16.841 slat (nsec): min=7689, max=87773, avg=21007.18, stdev=16815.09 00:43:16.841 clat (usec): min=9309, max=40128, avg=30111.30, stdev=2004.51 00:43:16.841 lat (usec): min=9319, max=40149, avg=30132.31, stdev=2003.90 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[15795], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:43:16.841 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:16.841 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:43:16.841 | 99.00th=[30802], 99.50th=[30802], 99.90th=[39584], 99.95th=[39584], 00:43:16.841 | 99.99th=[40109] 00:43:16.841 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2112.00, stdev=70.03, samples=20 00:43:16.841 iops : min= 512, max= 576, avg=528.00, stdev=17.51, samples=20 00:43:16.841 lat (msec) : 10=0.04%, 20=1.17%, 50=98.79% 00:43:16.841 cpu : usr=98.56%, sys=1.08%, ctx=11, majf=0, minf=9 00:43:16.841 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639886: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10007msec) 00:43:16.841 slat (nsec): min=7345, max=91045, avg=36757.67, stdev=21265.42 00:43:16.841 clat (usec): min=10369, max=47188, avg=30055.38, stdev=1480.51 00:43:16.841 lat (usec): min=10384, max=47222, avg=30092.14, stdev=1481.16 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:43:16.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:16.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30540], 99.90th=[46924], 99.95th=[46924], 00:43:16.841 | 99.99th=[47449] 00:43:16.841 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2095.16, stdev=76.45, samples=19 00:43:16.841 iops : min= 480, max= 544, avg=523.79, stdev=19.11, samples=19 00:43:16.841 lat (msec) : 20=0.27%, 50=99.73% 00:43:16.841 cpu : usr=98.48%, sys=1.13%, ctx=13, majf=0, minf=9 00:43:16.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639887: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=526, BW=2104KiB/s (2155kB/s)(20.6MiB/10007msec) 00:43:16.841 slat (nsec): min=8566, max=76017, avg=33144.11, stdev=15013.43 00:43:16.841 clat (usec): min=21012, max=30752, avg=30127.72, stdev=552.75 00:43:16.841 lat (usec): min=21026, max=30779, avg=30160.86, stdev=552.38 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:43:16.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:16.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:43:16.841 | 99.99th=[30802] 00:43:16.841 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2099.20, stdev=64.34, samples=20 00:43:16.841 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:43:16.841 lat (msec) : 50=100.00% 00:43:16.841 cpu : usr=98.59%, sys=0.95%, ctx=51, majf=0, minf=9 00:43:16.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 filename2: (groupid=0, jobs=1): err= 0: pid=639888: Fri Dec 13 12:48:43 2024 00:43:16.841 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec) 00:43:16.841 slat (nsec): min=8355, max=90801, avg=38551.34, stdev=20505.75 00:43:16.841 clat (usec): min=10423, max=39235, avg=29955.32, stdev=1409.66 00:43:16.841 lat (usec): min=10440, max=39252, avg=29993.87, stdev=1411.67 00:43:16.841 clat percentiles (usec): 00:43:16.841 | 1.00th=[28443], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:43:16.841 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 
60.00th=[30278], 00:43:16.841 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:43:16.841 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:43:16.841 | 99.99th=[39060] 00:43:16.841 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2105.80, stdev=65.57, samples=20 00:43:16.841 iops : min= 512, max= 545, avg=526.45, stdev=16.39, samples=20 00:43:16.841 lat (msec) : 20=0.61%, 50=99.39% 00:43:16.841 cpu : usr=98.48%, sys=1.11%, ctx=14, majf=0, minf=9 00:43:16.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:16.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.841 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:16.841 00:43:16.841 Run status group 0 (all jobs): 00:43:16.841 READ: bw=49.3MiB/s (51.7MB/s), 2098KiB/s-2131KiB/s (2149kB/s-2182kB/s), io=494MiB (518MB), run=10001-10019msec 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.841 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 bdev_null0 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 [2024-12-13 12:48:43.378773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 bdev_null1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:16.842 12:48:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:16.842 { 00:43:16.842 "params": { 00:43:16.842 "name": "Nvme$subsystem", 00:43:16.842 "trtype": "$TEST_TRANSPORT", 00:43:16.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:16.842 "adrfam": "ipv4", 00:43:16.842 "trsvcid": "$NVMF_PORT", 00:43:16.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:16.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:16.842 "hdgst": ${hdgst:-false}, 00:43:16.842 "ddgst": ${ddgst:-false} 00:43:16.842 }, 00:43:16.842 "method": "bdev_nvme_attach_controller" 00:43:16.842 } 00:43:16.842 EOF 00:43:16.842 )") 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:16.842 { 00:43:16.842 "params": { 00:43:16.842 "name": "Nvme$subsystem", 00:43:16.842 "trtype": "$TEST_TRANSPORT", 00:43:16.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:16.842 "adrfam": "ipv4", 00:43:16.842 "trsvcid": "$NVMF_PORT", 00:43:16.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:16.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:16.842 "hdgst": ${hdgst:-false}, 00:43:16.842 "ddgst": ${ddgst:-false} 00:43:16.842 }, 00:43:16.842 "method": "bdev_nvme_attach_controller" 00:43:16.842 } 00:43:16.842 EOF 00:43:16.842 )") 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:16.842 12:48:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:16.842 "params": { 00:43:16.842 "name": "Nvme0", 00:43:16.843 "trtype": "tcp", 00:43:16.843 "traddr": "10.0.0.2", 00:43:16.843 "adrfam": "ipv4", 00:43:16.843 "trsvcid": "4420", 00:43:16.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:16.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:16.843 "hdgst": false, 00:43:16.843 "ddgst": false 00:43:16.843 }, 00:43:16.843 "method": "bdev_nvme_attach_controller" 00:43:16.843 },{ 00:43:16.843 "params": { 00:43:16.843 "name": "Nvme1", 00:43:16.843 "trtype": "tcp", 00:43:16.843 "traddr": "10.0.0.2", 00:43:16.843 "adrfam": "ipv4", 00:43:16.843 "trsvcid": "4420", 00:43:16.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:16.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:16.843 "hdgst": false, 00:43:16.843 "ddgst": false 00:43:16.843 }, 00:43:16.843 "method": "bdev_nvme_attach_controller" 00:43:16.843 }' 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:16.843 12:48:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:16.843 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:16.843 ... 00:43:16.843 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:16.843 ... 
00:43:16.843 fio-3.35 00:43:16.843 Starting 4 threads 00:43:22.103 00:43:22.103 filename0: (groupid=0, jobs=1): err= 0: pid=641799: Fri Dec 13 12:48:49 2024 00:43:22.103 read: IOPS=2776, BW=21.7MiB/s (22.7MB/s)(108MiB/5001msec) 00:43:22.103 slat (usec): min=6, max=187, avg= 8.73, stdev= 3.18 00:43:22.103 clat (usec): min=695, max=5248, avg=2855.18, stdev=388.51 00:43:22.103 lat (usec): min=706, max=5266, avg=2863.91, stdev=388.37 00:43:22.103 clat percentiles (usec): 00:43:22.103 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:43:22.103 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 2966], 00:43:22.103 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3228], 95.00th=[ 3425], 00:43:22.103 | 99.00th=[ 3982], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5014], 00:43:22.103 | 99.99th=[ 5276] 00:43:22.103 bw ( KiB/s): min=21280, max=23328, per=26.03%, avg=22117.33, stdev=795.15, samples=9 00:43:22.103 iops : min= 2660, max= 2916, avg=2764.67, stdev=99.39, samples=9 00:43:22.103 lat (usec) : 750=0.02%, 1000=0.02% 00:43:22.103 lat (msec) : 2=1.87%, 4=97.17%, 10=0.92% 00:43:22.103 cpu : usr=96.28%, sys=3.40%, ctx=8, majf=0, minf=9 00:43:22.103 IO depths : 1=0.3%, 2=6.4%, 4=64.6%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.103 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.103 issued rwts: total=13886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:22.103 filename0: (groupid=0, jobs=1): err= 0: pid=641800: Fri Dec 13 12:48:49 2024 00:43:22.103 read: IOPS=2597, BW=20.3MiB/s (21.3MB/s)(101MiB/5001msec) 00:43:22.103 slat (nsec): min=6180, max=35180, avg=8765.20, stdev=2950.57 00:43:22.103 clat (usec): min=635, max=5590, avg=3054.79, stdev=491.23 00:43:22.103 lat (usec): min=643, max=5601, avg=3063.56, stdev=491.09 00:43:22.103 clat percentiles (usec): 00:43:22.103 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2769], 00:43:22.103 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:22.103 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3654], 95.00th=[ 4047], 00:43:22.103 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5473], 00:43:22.103 | 99.99th=[ 5604] 00:43:22.103 bw ( KiB/s): min=19600, max=21744, per=24.58%, avg=20888.33, stdev=659.47, samples=9 00:43:22.103 iops : min= 2450, max= 2718, avg=2611.00, stdev=82.38, samples=9 00:43:22.103 lat (usec) : 750=0.02%, 1000=0.02% 00:43:22.103 lat (msec) : 2=0.78%, 4=93.84%, 10=5.34% 00:43:22.103 cpu : usr=95.64%, sys=4.06%, ctx=6, majf=0, minf=9 00:43:22.103 IO depths : 1=0.1%, 2=3.7%, 4=67.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.103 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.103 issued rwts: total=12988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:22.103 filename1: (groupid=0, jobs=1): err= 0: pid=641801: Fri Dec 13 12:48:49 2024 00:43:22.103 read: IOPS=2724, BW=21.3MiB/s (22.3MB/s)(106MiB/5002msec) 00:43:22.103 slat (nsec): min=6195, max=32253, avg=9095.65, stdev=3109.94 00:43:22.104 clat (usec): min=989, max=5521, avg=2909.19, stdev=421.24 00:43:22.104 lat (usec): min=1000, max=5533, avg=2918.29, stdev=421.20 00:43:22.104 clat percentiles (usec): 00:43:22.104 | 1.00th=[ 1860], 5.00th=[ 
2245], 10.00th=[ 2442], 20.00th=[ 2606], 00:43:22.104 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:43:22.104 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3359], 95.00th=[ 3687], 00:43:22.104 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 5014], 99.95th=[ 5276], 00:43:22.104 | 99.99th=[ 5473] 00:43:22.104 bw ( KiB/s): min=21344, max=22064, per=25.52%, avg=21687.11, stdev=238.54, samples=9 00:43:22.104 iops : min= 2668, max= 2758, avg=2710.89, stdev=29.82, samples=9 00:43:22.104 lat (usec) : 1000=0.01% 00:43:22.104 lat (msec) : 2=1.49%, 4=96.18%, 10=2.32% 00:43:22.104 cpu : usr=96.40%, sys=3.26%, ctx=8, majf=0, minf=10 00:43:22.104 IO depths : 1=0.3%, 2=5.9%, 4=65.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.104 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.104 issued rwts: total=13626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:22.104 filename1: (groupid=0, jobs=1): err= 0: pid=641802: Fri Dec 13 12:48:49 2024 00:43:22.104 read: IOPS=2525, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5002msec) 00:43:22.104 slat (nsec): min=6176, max=35244, avg=8799.30, stdev=3094.16 00:43:22.104 clat (usec): min=1155, max=5650, avg=3141.09, stdev=457.28 00:43:22.104 lat (usec): min=1162, max=5657, avg=3149.89, stdev=457.06 00:43:22.104 clat percentiles (usec): 00:43:22.104 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2933], 00:43:22.104 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:43:22.104 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3687], 95.00th=[ 4113], 00:43:22.104 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5538], 00:43:22.104 | 99.99th=[ 5669] 00:43:22.104 bw ( KiB/s): min=19216, max=21088, per=23.93%, avg=20339.56, stdev=561.24, samples=9 00:43:22.104 iops : min= 2402, max= 2636, avg=2542.44, stdev=70.16, samples=9 00:43:22.104 lat (msec) : 2=0.31%, 4=93.76%, 10=5.94% 00:43:22.104 cpu : usr=96.40%, sys=3.28%, ctx=9, majf=0, minf=9 00:43:22.104 IO depths : 1=0.2%, 2=2.8%, 4=69.7%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.104 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.104 issued rwts: total=12635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:22.104 00:43:22.104 Run status group 0 (all jobs): 00:43:22.104 READ: bw=83.0MiB/s (87.0MB/s), 19.7MiB/s-21.7MiB/s (20.7MB/s-22.7MB/s), io=415MiB (435MB), run=5001-5002msec 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 00:43:22.362 real 0m24.641s 00:43:22.362 user 4m52.950s 00:43:22.362 sys 0m5.269s 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 ************************************ 00:43:22.362 END TEST fio_dif_rand_params 00:43:22.362 ************************************ 00:43:22.362 12:48:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:22.362 12:48:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:22.362 12:48:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:22.362 12:48:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 ************************************ 00:43:22.362 START TEST fio_dif_digest 00:43:22.362 ************************************ 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:22.362 12:48:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 bdev_null0 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.362 [2024-12-13 12:48:50.056001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:22.362 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:22.621 { 00:43:22.621 "params": { 00:43:22.621 "name": "Nvme$subsystem", 00:43:22.621 "trtype": "$TEST_TRANSPORT", 
00:43:22.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:22.621 "adrfam": "ipv4", 00:43:22.621 "trsvcid": "$NVMF_PORT", 00:43:22.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:22.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:22.621 "hdgst": ${hdgst:-false}, 00:43:22.621 "ddgst": ${ddgst:-false} 00:43:22.621 }, 00:43:22.621 "method": "bdev_nvme_attach_controller" 00:43:22.621 } 00:43:22.621 EOF 00:43:22.621 )") 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:22.621 "params": { 00:43:22.621 "name": "Nvme0", 00:43:22.621 "trtype": "tcp", 00:43:22.621 "traddr": "10.0.0.2", 00:43:22.621 "adrfam": "ipv4", 00:43:22.621 "trsvcid": "4420", 00:43:22.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:22.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:22.621 "hdgst": true, 00:43:22.621 "ddgst": true 00:43:22.621 }, 00:43:22.621 "method": "bdev_nvme_attach_controller" 00:43:22.621 }' 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:22.621 12:48:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:22.880 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:22.880 ... 
00:43:22.880 fio-3.35 00:43:22.880 Starting 3 threads 00:43:35.084 00:43:35.084 filename0: (groupid=0, jobs=1): err= 0: pid=642995: Fri Dec 13 12:49:00 2024 00:43:35.084 read: IOPS=289, BW=36.2MiB/s (37.9MB/s)(363MiB/10047msec) 00:43:35.084 slat (nsec): min=6495, max=66379, avg=19341.31, stdev=6425.60 00:43:35.084 clat (usec): min=5363, max=49744, avg=10331.08, stdev=1254.98 00:43:35.084 lat (usec): min=5372, max=49764, avg=10350.42, stdev=1255.15 00:43:35.084 clat percentiles (usec): 00:43:35.084 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:43:35.084 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:43:35.084 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:43:35.084 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13698], 99.95th=[46924], 00:43:35.084 | 99.99th=[49546] 00:43:35.084 bw ( KiB/s): min=35072, max=39168, per=34.71%, avg=37180.20, stdev=951.00, samples=20 00:43:35.084 iops : min= 274, max= 306, avg=290.45, stdev= 7.42, samples=20 00:43:35.084 lat (msec) : 10=33.09%, 20=66.84%, 50=0.07% 00:43:35.084 cpu : usr=95.37%, sys=4.05%, ctx=99, majf=0, minf=141 00:43:35.084 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:35.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 issued rwts: total=2907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:35.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:35.084 filename0: (groupid=0, jobs=1): err= 0: pid=642996: Fri Dec 13 12:49:00 2024 00:43:35.084 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(338MiB/10045msec) 00:43:35.084 slat (nsec): min=6530, max=45690, avg=16663.24, stdev=7525.38 00:43:35.084 clat (usec): min=7819, max=47193, avg=11101.83, stdev=1283.10 00:43:35.084 lat (usec): min=7843, max=47206, avg=11118.49, stdev=1283.53 00:43:35.084 clat percentiles (usec): 00:43:35.084 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:43:35.084 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:43:35.084 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:43:35.084 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14615], 99.95th=[44303], 00:43:35.084 | 99.99th=[47449] 00:43:35.084 bw ( KiB/s): min=32512, max=37120, per=32.31%, avg=34611.20, stdev=1026.02, samples=20 00:43:35.084 iops : min= 254, max= 290, avg=270.40, stdev= 8.02, samples=20 00:43:35.084 lat (msec) : 10=8.94%, 20=90.98%, 50=0.07% 00:43:35.084 cpu : usr=96.62%, sys=3.04%, ctx=17, majf=0, minf=144 00:43:35.084 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:35.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 issued rwts: total=2706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:35.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:35.084 filename0: (groupid=0, jobs=1): err= 0: pid=642997: Fri Dec 13 12:49:00 2024 00:43:35.084 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(349MiB/10044msec) 00:43:35.084 slat (nsec): min=6528, max=51319, avg=17697.90, stdev=7753.32 00:43:35.084 clat (usec): min=8298, max=47248, avg=10749.77, stdev=1251.73 00:43:35.084 lat (usec): min=8325, max=47260, avg=10767.47, stdev=1251.88 00:43:35.084 clat percentiles (usec): 00:43:35.084 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:43:35.084 
| 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:43:35.084 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12125], 00:43:35.084 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14615], 99.95th=[44303], 00:43:35.084 | 99.99th=[47449] 00:43:35.084 bw ( KiB/s): min=33536, max=37888, per=33.37%, avg=35737.60, stdev=973.58, samples=20 00:43:35.084 iops : min= 262, max= 296, avg=279.20, stdev= 7.61, samples=20 00:43:35.084 lat (msec) : 10=17.79%, 20=82.14%, 50=0.07% 00:43:35.084 cpu : usr=96.62%, sys=3.05%, ctx=16, majf=0, minf=140 00:43:35.084 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:35.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.084 issued rwts: total=2794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:35.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:35.084 00:43:35.084 Run status group 0 (all jobs): 00:43:35.084 READ: bw=105MiB/s (110MB/s), 33.7MiB/s-36.2MiB/s (35.3MB/s-37.9MB/s), io=1051MiB (1102MB), run=10044-10047msec 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.084 00:43:35.084 real 0m11.122s 00:43:35.084 user 0m35.648s 00:43:35.084 sys 0m1.303s 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.084 12:49:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:35.084 ************************************ 00:43:35.084 END TEST fio_dif_digest 00:43:35.084 ************************************ 00:43:35.084 12:49:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:35.084 12:49:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:35.084 rmmod nvme_tcp 00:43:35.084 rmmod nvme_fabrics 00:43:35.084 rmmod nvme_keyring 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 634209 ']' 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 634209 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 634209 ']' 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 634209 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634209 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634209' 00:43:35.084 killing process with pid 634209 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@973 -- # kill 634209 00:43:35.084 12:49:01 nvmf_dif -- common/autotest_common.sh@978 -- # wait 634209 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:35.084 12:49:01 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:36.464 Waiting for block devices as requested 00:43:36.464 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:36.723 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:36.723 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:36.982 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:36.982 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:36.982 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:36.982 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:37.240 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:37.240 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:37.240 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:37.499 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:37.499 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:37.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:37.758 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:37.758 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:37.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:37.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:38.016 12:49:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:38.017 12:49:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:38.017 12:49:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:38.017 12:49:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:38.017 12:49:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:38.017 12:49:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:39.921 12:49:07 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:39.921 00:43:39.921 real 1m14.345s 
00:43:39.921 user 7m10.982s 00:43:39.921 sys 0m20.074s 00:43:39.921 12:49:07 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:39.921 12:49:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.921 ************************************ 00:43:39.921 END TEST nvmf_dif 00:43:39.921 ************************************ 00:43:40.181 12:49:07 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:40.181 12:49:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:40.181 12:49:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:40.181 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:43:40.181 ************************************ 00:43:40.181 START TEST nvmf_abort_qd_sizes 00:43:40.181 ************************************ 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:40.181 * Looking for test storage... 00:43:40.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:40.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.181 --rc genhtml_branch_coverage=1 00:43:40.181 --rc genhtml_function_coverage=1 00:43:40.181 --rc genhtml_legend=1 00:43:40.181 --rc geninfo_all_blocks=1 00:43:40.181 --rc geninfo_unexecuted_blocks=1 00:43:40.181 00:43:40.181 ' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:40.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.181 --rc genhtml_branch_coverage=1 00:43:40.181 --rc genhtml_function_coverage=1 00:43:40.181 --rc genhtml_legend=1 00:43:40.181 --rc geninfo_all_blocks=1 00:43:40.181 --rc geninfo_unexecuted_blocks=1 00:43:40.181 00:43:40.181 ' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:40.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.181 --rc genhtml_branch_coverage=1 00:43:40.181 --rc genhtml_function_coverage=1 00:43:40.181 --rc genhtml_legend=1 00:43:40.181 --rc geninfo_all_blocks=1 00:43:40.181 --rc geninfo_unexecuted_blocks=1 00:43:40.181 00:43:40.181 ' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:40.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.181 --rc genhtml_branch_coverage=1 00:43:40.181 --rc genhtml_function_coverage=1 00:43:40.181 --rc genhtml_legend=1 00:43:40.181 --rc geninfo_all_blocks=1 00:43:40.181 --rc geninfo_unexecuted_blocks=1 00:43:40.181 00:43:40.181 ' 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:40.181 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:40.182 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:40.182 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:40.182 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:40.442 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:40.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:40.443 12:49:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:47.017 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:47.017 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:47.017 Found net devices under 0000:af:00.0: cvl_0_0 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:47.017 Found net devices under 0000:af:00.1: cvl_0_1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:47.017 12:49:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:47.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:47.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:43:47.017 00:43:47.017 --- 10.0.0.2 ping statistics --- 00:43:47.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:47.017 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:43:47.017 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:47.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:47.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:43:47.018 00:43:47.018 --- 10.0.0.1 ping statistics --- 00:43:47.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:47.018 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:43:47.018 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:47.018 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:47.018 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:47.018 12:49:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:48.923 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:48.923 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:49.182 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:50.119 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=650650 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 650650 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 650650 ']' 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
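The namespace plumbing in the trace above gives the test a self-contained target/initiator pair on one host: cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) as the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and the two pings prove reachability in both directions. Condensed from the trace (run as root; interface names are the cvl devices found earlier):

    ip -4 addr flush cvl_0_0                               # start from clean addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator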
00:43:50.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:50.119 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:50.119 [2024-12-13 12:49:17.732097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:50.119 [2024-12-13 12:49:17.732144] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:50.119 [2024-12-13 12:49:17.811307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:50.379 [2024-12-13 12:49:17.835952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:50.379 [2024-12-13 12:49:17.835989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:50.379 [2024-12-13 12:49:17.835996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:50.379 [2024-12-13 12:49:17.836003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:50.379 [2024-12-13 12:49:17.836008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:50.379 [2024-12-13 12:49:17.837318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:50.379 [2024-12-13 12:49:17.837421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:50.379 [2024-12-13 12:49:17.837527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.379 [2024-12-13 12:49:17.837527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:50.379 
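nvmfappstart then launches the target binary inside that namespace (the trace prepends NVMF_TARGET_NS_CMD to NVMF_APP) and blocks in waitforlisten until the RPC socket answers. The launch command below is verbatim from the trace; the polling loop is only a bare-bones stand-in for what the harness's waitforlisten helper accomplishes, assuming the default /var/tmp/spdk.sock socket:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # poll the RPC server until it responds, bailing out if the target dies
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done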
12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:50.379 12:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:50.379 ************************************ 00:43:50.379 START TEST spdk_target_abort 00:43:50.379 ************************************ 00:43:50.379 12:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:50.379 12:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:50.379 12:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:50.379 12:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.379 12:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.670 spdk_targetn1 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.670 [2024-12-13 12:49:20.836546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:53.670 [2024-12-13 12:49:20.884867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:53.670 12:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:56.959 Initializing NVMe Controllers 00:43:56.959 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:56.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:56.959 Initialization complete. Launching workers. 00:43:56.959 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17001, failed: 0 00:43:56.959 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1365, failed to submit 15636 00:43:56.959 success 793, unsuccessful 572, failed 0 00:43:56.959 12:49:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:56.959 12:49:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:00.247 Initializing NVMe Controllers 00:44:00.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:00.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:00.247 Initialization complete. Launching workers. 00:44:00.247 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8553, failed: 0 00:44:00.247 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7301 00:44:00.247 success 316, unsuccessful 936, failed 0 00:44:00.247 12:49:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:00.247 12:49:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:03.540 Initializing NVMe Controllers 00:44:03.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:03.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:03.540 Initialization complete. Launching workers. 
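The spdk_target_abort body reduces to five RPCs that stand up a TCP subsystem backed by the local NVMe drive, followed by one abort run per queue depth. Condensed below; RPC names and arguments are verbatim from the trace, while the rpc shorthand is a hypothetical wrapper standing in for the harness's rpc_cmd:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # hypothetical shorthand
    rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

On a plausible reading of each summary line, "success" counts aborts that terminated their I/O, "unsuccessful" counts aborts whose I/O had already completed, and "failed to submit" counts attempts that never fit in the queue; at -q 4 the queue is so shallow that 15636 of 17001 abort attempts could not even be submitted.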
00:44:03.540 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38142, failed: 0 00:44:03.540 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2739, failed to submit 35403 00:44:03.540 success 606, unsuccessful 2133, failed 0 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.540 12:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 650650 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 650650 ']' 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 650650 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650650 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650650' 00:44:04.479 killing process with pid 650650 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 650650 00:44:04.479 12:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 650650 00:44:04.479 00:44:04.479 real 0m14.095s 00:44:04.479 user 0m53.950s 00:44:04.479 sys 0m2.325s 00:44:04.479 12:49:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:04.479 12:49:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:04.479 ************************************ 00:44:04.479 END TEST spdk_target_abort 00:44:04.479 ************************************ 00:44:04.479 12:49:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:04.479 12:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:04.479 12:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:04.479 12:49:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:04.738 ************************************ 00:44:04.738 START TEST kernel_target_abort 00:44:04.738 
************************************ 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:04.738 12:49:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:07.277 Waiting for block devices as requested 00:44:07.277 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:07.536 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:07.536 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:07.536 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:07.536 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:07.794 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:07.794 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:07.794 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:08.053 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:08.053 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:08.053 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:08.312 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:08.312 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:08.312 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:08.312 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:08.571 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:08.571 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:08.571 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:08.831 No valid GPT data, bailing 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:08.831 12:49:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:08.831 00:44:08.831 Discovery Log Number of Records 2, Generation counter 2 00:44:08.831 =====Discovery Log Entry 0====== 00:44:08.831 trtype: tcp 00:44:08.831 adrfam: ipv4 00:44:08.831 subtype: current discovery subsystem 00:44:08.831 treq: not specified, sq flow control disable supported 00:44:08.831 portid: 1 00:44:08.831 trsvcid: 4420 00:44:08.831 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:08.831 traddr: 10.0.0.1 00:44:08.831 eflags: none 00:44:08.831 sectype: none 00:44:08.831 =====Discovery Log Entry 1====== 00:44:08.831 trtype: tcp 00:44:08.831 adrfam: ipv4 00:44:08.831 subtype: nvme subsystem 00:44:08.831 treq: not specified, sq flow control disable supported 00:44:08.831 portid: 1 00:44:08.831 trsvcid: 4420 00:44:08.831 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:08.831 traddr: 10.0.0.1 00:44:08.831 eflags: none 00:44:08.831 sectype: none 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:08.831 12:49:36 
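For kernel_target_abort the target is the in-kernel nvmet stack rather than SPDK: the trace builds a subsystem, namespace, and TCP port under configfs, links them, and confirms both discovery-log entries with nvme discover. xtrace does not capture redirection targets, so the nvmet attribute names below are assumptions inferred from the echoed values and their order; everything else is verbatim from the trace:

    modprobe nvmet     # nvmet_tcp appears to be pulled in once the TCP port is configured
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"      # assumed target
    echo 1            > "$subsys/attr_allow_any_host"                  # assumed target
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"             # assumed target
    echo 1            > "$subsys/namespaces/1/enable"                  # assumed target
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                   # assumed target
    echo tcp          > "$nvmet/ports/1/addr_trtype"                   # assumed target
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"                  # assumed target
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"                   # assumed target
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420

Against this target every abort in the three runs that follow comes back unsuccessful (success 0 throughout), consistent with the kernel target completing I/O before the aborts can take effect.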
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:08.831 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:08.832 12:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:12.120 Initializing NVMe Controllers 00:44:12.120 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:12.120 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:12.120 Initialization complete. Launching workers. 00:44:12.120 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95643, failed: 0 00:44:12.120 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95643, failed to submit 0 00:44:12.120 success 0, unsuccessful 95643, failed 0 00:44:12.120 12:49:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:12.120 12:49:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:15.411 Initializing NVMe Controllers 00:44:15.411 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:15.411 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:15.411 Initialization complete. Launching workers. 
00:44:15.411 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 149892, failed: 0 00:44:15.411 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37586, failed to submit 112306 00:44:15.411 success 0, unsuccessful 37586, failed 0 00:44:15.411 12:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:15.411 12:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:18.702 Initializing NVMe Controllers 00:44:18.702 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:18.702 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:18.702 Initialization complete. Launching workers. 00:44:18.702 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141298, failed: 0 00:44:18.702 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35386, failed to submit 105912 00:44:18.702 success 0, unsuccessful 35386, failed 0 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:18.702 12:49:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:21.239 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:21.239 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:44:21.239 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:21.808 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:22.069 00:44:22.069 real 0m17.430s 00:44:22.069 user 0m9.188s 00:44:22.069 sys 0m4.959s 00:44:22.069 12:49:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:22.069 12:49:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:22.069 ************************************ 00:44:22.069 END TEST kernel_target_abort 00:44:22.069 ************************************ 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:22.069 rmmod nvme_tcp 00:44:22.069 rmmod nvme_fabrics 00:44:22.069 rmmod nvme_keyring 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 650650 ']' 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 650650 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 650650 ']' 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 650650 00:44:22.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (650650) - No such process 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 650650 is not found' 00:44:22.069 Process with pid 650650 is not found 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:22.069 12:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:25.361 Waiting for block devices as requested 00:44:25.361 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:25.361 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:25.361 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:25.620 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:25.620 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:25.620 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:25.879 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:25.879 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:25.879 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:25.879 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:26.138 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:26.138 
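clean_kernel_target undoes the configfs tree in reverse and unloads the modules before setup.sh rebinds the drivers. Condensed from the trace at common.sh@712-@723; the destination of the bare "echo 0" is not captured by xtrace and is assumed to be the namespace enable switch:

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed target
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet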
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:26.138 12:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.676 12:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:28.676 00:44:28.676 real 0m48.187s 00:44:28.676 user 1m7.497s 00:44:28.676 sys 0m15.907s 00:44:28.676 12:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:28.676 12:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:28.676 ************************************ 00:44:28.676 END TEST nvmf_abort_qd_sizes 00:44:28.676 ************************************ 00:44:28.676 12:49:55 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:28.676 12:49:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:28.676 12:49:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:28.676 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:44:28.676 ************************************ 00:44:28.676 START TEST keyring_file 00:44:28.676 ************************************ 00:44:28.676 12:49:55 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:28.676 * Looking for test storage... 
00:44:28.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:28.676 12:49:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:28.676 12:49:56 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:28.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.677 --rc genhtml_branch_coverage=1 00:44:28.677 --rc genhtml_function_coverage=1 00:44:28.677 --rc genhtml_legend=1 00:44:28.677 --rc geninfo_all_blocks=1 00:44:28.677 --rc geninfo_unexecuted_blocks=1 00:44:28.677 00:44:28.677 ' 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:28.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.677 --rc genhtml_branch_coverage=1 00:44:28.677 --rc genhtml_function_coverage=1 00:44:28.677 --rc genhtml_legend=1 00:44:28.677 --rc geninfo_all_blocks=1 
00:44:28.677 --rc geninfo_unexecuted_blocks=1 00:44:28.677 00:44:28.677 ' 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:28.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.677 --rc genhtml_branch_coverage=1 00:44:28.677 --rc genhtml_function_coverage=1 00:44:28.677 --rc genhtml_legend=1 00:44:28.677 --rc geninfo_all_blocks=1 00:44:28.677 --rc geninfo_unexecuted_blocks=1 00:44:28.677 00:44:28.677 ' 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:28.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.677 --rc genhtml_branch_coverage=1 00:44:28.677 --rc genhtml_function_coverage=1 00:44:28.677 --rc genhtml_legend=1 00:44:28.677 --rc geninfo_all_blocks=1 00:44:28.677 --rc geninfo_unexecuted_blocks=1 00:44:28.677 00:44:28.677 ' 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:28.677 12:49:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:28.677 12:49:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:28.677 12:49:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:28.677 12:49:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:28.677 12:49:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.677 12:49:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.677 12:49:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.677 12:49:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:28.677 12:49:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:28.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DGM2IYJO1A 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DGM2IYJO1A 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DGM2IYJO1A 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DGM2IYJO1A 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LeVESVvGCC 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:28.677 12:49:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LeVESVvGCC 00:44:28.677 12:49:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LeVESVvGCC 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LeVESVvGCC 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=659228 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 659228 00:44:28.677 12:49:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659228 ']' 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:28.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:28.677 12:49:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:28.677 [2024-12-13 12:49:56.291198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:28.677 [2024-12-13 12:49:56.291248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659228 ] 00:44:28.677 [2024-12-13 12:49:56.364100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:28.937 [2024-12-13 12:49:56.387009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:28.937 12:49:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:28.937 12:49:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:28.937 12:49:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:28.937 12:49:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.937 12:49:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:28.937 [2024-12-13 12:49:56.587095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:28.937 null0 00:44:28.937 [2024-12-13 12:49:56.619142] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:28.937 [2024-12-13 12:49:56.619431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.197 12:49:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:29.197 [2024-12-13 12:49:56.647206] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:29.197 request: 00:44:29.197 { 00:44:29.197 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:29.197 "secure_channel": false, 00:44:29.197 "listen_address": { 00:44:29.197 "trtype": "tcp", 00:44:29.197 "traddr": "127.0.0.1", 00:44:29.197 "trsvcid": "4420" 00:44:29.197 }, 00:44:29.197 "method": "nvmf_subsystem_add_listener", 00:44:29.197 "req_id": 1 00:44:29.197 } 00:44:29.197 Got JSON-RPC error response 00:44:29.197 response: 00:44:29.197 { 00:44:29.197 "code": 
-32602, 00:44:29.197 "message": "Invalid parameters" 00:44:29.197 } 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:29.197 12:49:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=659234 00:44:29.197 12:49:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 659234 /var/tmp/bperf.sock 00:44:29.197 12:49:56 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659234 ']' 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:29.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:29.197 [2024-12-13 12:49:56.702381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:29.197 [2024-12-13 12:49:56.702429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659234 ] 00:44:29.197 [2024-12-13 12:49:56.773037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.197 [2024-12-13 12:49:56.795633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:29.197 12:49:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:29.197 12:49:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:29.197 12:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:29.456 12:49:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LeVESVvGCC 00:44:29.456 12:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LeVESVvGCC 00:44:29.716 12:49:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:29.716 12:49:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:29.716 12:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.716 12:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:29.716 12:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.975 
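For reference, the key preparation and registration just traced reduces to one helper plus two RPCs. Below is a hedged sketch: the helper body is illustrative (assumption: it mirrors what the inline `python -` step in nvmf/common.sh computes, namely the TP 8006-style interchange encoding "NVMeTLSkey-1:<digest>:base64(key bytes || CRC-32 LE):", where digest 00 means no hash annotation); the rpc.py path and the /var/tmp/bperf.sock socket are specific to this CI run.

  # Sketch of prep_key + keyring_file_add_key as traced above (assumptions noted).
  format_interchange_psk() {
      local key=$1 digest=$2
      python3 - "$key" "$digest" <<'PYEOF'
  import base64, sys, zlib
  raw = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte integrity word, little endian
  print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
  PYEOF
  }

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  path=$(mktemp)
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
  chmod 0600 "$path"                                   # keyring_file insists on 0600
  "$rpc" -s "$sock" keyring_file_add_key key0 "$path"  # register with the bdevperf app
  "$rpc" -s "$sock" keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .path, .refcnt'

For a 16-byte key this prints a 28-character base64 payload between the prefix and the trailing colon. The refcnt reported by keyring_get_keys starts at 1 (the keyring's own reference) and, as the checks below show, rises to 2 while an attached controller holds the key; the 0600 requirement is probed explicitly later in the run, where a chmod 0660 key file is rejected with "Invalid permissions".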
12:49:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DGM2IYJO1A == \/\t\m\p\/\t\m\p\.\D\G\M\2\I\Y\J\O\1\A ]] 00:44:29.975 12:49:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:29.975 12:49:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:29.975 12:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.975 12:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:29.975 12:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.242 12:49:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.LeVESVvGCC == \/\t\m\p\/\t\m\p\.\L\e\V\E\S\V\v\G\C\C ]] 00:44:30.242 12:49:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.242 12:49:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:30.242 12:49:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:30.242 12:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.503 12:49:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:30.503 12:49:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:30.503 12:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:30.762 [2024-12-13 12:49:58.265975] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:30.762 nvme0n1 00:44:30.762 12:49:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:30.762 12:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.762 12:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.762 12:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.762 12:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:30.762 12:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.020 12:49:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:31.020 12:49:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:31.020 12:49:58 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:44:31.020 12:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.020 12:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:31.020 12:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:31.020 12:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.278 12:49:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:31.278 12:49:58 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:31.278 Running I/O for 1 seconds... 00:44:32.215 19256.00 IOPS, 75.22 MiB/s 00:44:32.215 Latency(us) 00:44:32.215 [2024-12-13T11:49:59.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:32.215 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:32.215 nvme0n1 : 1.00 19304.00 75.41 0.00 0.00 6618.04 2777.48 17975.59 00:44:32.215 [2024-12-13T11:49:59.915Z] =================================================================================================================== 00:44:32.215 [2024-12-13T11:49:59.915Z] Total : 19304.00 75.41 0.00 0.00 6618.04 2777.48 17975.59 00:44:32.215 { 00:44:32.215 "results": [ 00:44:32.215 { 00:44:32.215 "job": "nvme0n1", 00:44:32.215 "core_mask": "0x2", 00:44:32.215 "workload": "randrw", 00:44:32.215 "percentage": 50, 00:44:32.215 "status": "finished", 00:44:32.215 "queue_depth": 128, 00:44:32.215 "io_size": 4096, 00:44:32.215 "runtime": 1.004196, 00:44:32.215 "iops": 19304.000414261758, 00:44:32.215 "mibps": 75.40625161820999, 00:44:32.215 "io_failed": 0, 00:44:32.215 "io_timeout": 0, 00:44:32.215 "avg_latency_us": 6618.044900499895, 00:44:32.215 "min_latency_us": 2777.478095238095, 00:44:32.215 "max_latency_us": 17975.588571428572 00:44:32.215 } 00:44:32.215 ], 00:44:32.215 "core_count": 1 00:44:32.215 } 00:44:32.215 12:49:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:32.215 12:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:32.474 12:50:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:32.474 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:32.474 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.474 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.474 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.474 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.733 12:50:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:32.733 12:50:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:32.733 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:32.733 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.733 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.733 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:32.733 12:50:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.993 12:50:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:32.993 12:50:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:32.993 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:32.993 [2024-12-13 12:50:00.671681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:32.993 [2024-12-13 12:50:00.672395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b950 (107): Transport endpoint is not connected 00:44:32.993 [2024-12-13 12:50:00.673388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b950 (9): Bad file descriptor 00:44:32.993 [2024-12-13 12:50:00.674390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:32.993 [2024-12-13 12:50:00.674403] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:32.993 [2024-12-13 12:50:00.674410] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:32.993 [2024-12-13 12:50:00.674418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
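This attach used key1, which does not match the PSK the target was set up with, so the TLS handshake never completes: the socket reports ENOTCONN (107), the controller lands in a failed state, and the JSON-RPC exchange recorded next returns -5 (Input/output error). A hedged sketch of the negative check, reusing the rpc/sock variables from the sketch above (flags are the ones visible in the trace):

  # Expect failure: mismatched PSK for this subsystem.
  if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk key1; then
      echo "FAIL: attach with the wrong key unexpectedly succeeded"
  else
      echo "OK: attach rejected as expected"
  fi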
00:44:32.993 request: 00:44:32.993 { 00:44:32.993 "name": "nvme0", 00:44:32.993 "trtype": "tcp", 00:44:32.993 "traddr": "127.0.0.1", 00:44:32.993 "adrfam": "ipv4", 00:44:32.993 "trsvcid": "4420", 00:44:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:32.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:32.993 "prchk_reftag": false, 00:44:32.993 "prchk_guard": false, 00:44:32.993 "hdgst": false, 00:44:32.993 "ddgst": false, 00:44:32.993 "psk": "key1", 00:44:32.993 "allow_unrecognized_csi": false, 00:44:32.993 "method": "bdev_nvme_attach_controller", 00:44:32.993 "req_id": 1 00:44:32.993 } 00:44:32.993 Got JSON-RPC error response 00:44:32.993 response: 00:44:32.993 { 00:44:32.993 "code": -5, 00:44:32.993 "message": "Input/output error" 00:44:32.993 } 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:32.993 12:50:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:32.993 12:50:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:33.252 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.253 12:50:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:33.253 12:50:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:33.253 12:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.512 12:50:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:33.512 12:50:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:33.512 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:33.770 12:50:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:33.770 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:33.770 12:50:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:33.770 12:50:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:33.770 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.029 12:50:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:34.029 12:50:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.DGM2IYJO1A 00:44:34.029 12:50:01 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.029 12:50:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.029 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.288 [2024-12-13 12:50:01.827458] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DGM2IYJO1A': 0100660 00:44:34.288 [2024-12-13 12:50:01.827485] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:34.288 request: 00:44:34.288 { 00:44:34.288 "name": "key0", 00:44:34.288 "path": "/tmp/tmp.DGM2IYJO1A", 00:44:34.288 "method": "keyring_file_add_key", 00:44:34.288 "req_id": 1 00:44:34.288 } 00:44:34.288 Got JSON-RPC error response 00:44:34.288 response: 00:44:34.288 { 00:44:34.288 "code": -1, 00:44:34.288 "message": "Operation not permitted" 00:44:34.288 } 00:44:34.288 12:50:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:34.288 12:50:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:34.288 12:50:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:34.288 12:50:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:34.288 12:50:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.DGM2IYJO1A 00:44:34.288 12:50:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.288 12:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DGM2IYJO1A 00:44:34.607 12:50:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.DGM2IYJO1A 00:44:34.607 12:50:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.607 12:50:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:34.607 12:50:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.607 12:50:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.607 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.952 [2024-12-13 12:50:02.404994] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DGM2IYJO1A': No such file or directory 00:44:34.952 [2024-12-13 12:50:02.405017] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:34.952 [2024-12-13 12:50:02.405034] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:34.952 [2024-12-13 12:50:02.405056] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:34.952 [2024-12-13 12:50:02.405069] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:34.952 [2024-12-13 12:50:02.405076] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:34.952 request: 00:44:34.952 { 00:44:34.952 "name": "nvme0", 00:44:34.952 "trtype": "tcp", 00:44:34.952 "traddr": "127.0.0.1", 00:44:34.952 "adrfam": "ipv4", 00:44:34.952 "trsvcid": "4420", 00:44:34.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:34.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:34.952 "prchk_reftag": false, 00:44:34.952 "prchk_guard": false, 00:44:34.952 "hdgst": false, 00:44:34.952 "ddgst": false, 00:44:34.952 "psk": "key0", 00:44:34.952 "allow_unrecognized_csi": false, 00:44:34.952 "method": "bdev_nvme_attach_controller", 00:44:34.952 "req_id": 1 00:44:34.952 } 00:44:34.952 Got JSON-RPC error response 00:44:34.952 response: 00:44:34.952 { 00:44:34.952 "code": -19, 00:44:34.952 "message": "No such device" 00:44:34.952 } 00:44:34.952 12:50:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:34.952 12:50:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:34.952 12:50:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:34.952 12:50:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:34.952 12:50:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:34.952 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:35.239 12:50:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:35.239 12:50:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zQLA0GkMK8 00:44:35.239 12:50:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:35.239 12:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:35.509 nvme0n1 00:44:35.509 12:50:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:35.509 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:35.509 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:35.509 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:35.509 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:35.509 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.768 12:50:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:35.768 12:50:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:35.768 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:36.027 12:50:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:36.027 12:50:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.027 12:50:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:36.027 12:50:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.027 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.285 12:50:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:36.285 12:50:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:36.285 12:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:36.544 12:50:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:36.544 12:50:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:36.544 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.803 12:50:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:36.803 12:50:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zQLA0GkMK8 00:44:36.803 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zQLA0GkMK8 00:44:36.803 12:50:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LeVESVvGCC 00:44:36.803 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LeVESVvGCC 00:44:37.063 12:50:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.063 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.322 nvme0n1 00:44:37.322 12:50:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:37.322 12:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:37.583 12:50:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:37.584 "subsystems": [ 00:44:37.584 { 00:44:37.584 "subsystem": "keyring", 00:44:37.584 "config": [ 00:44:37.584 { 00:44:37.584 "method": "keyring_file_add_key", 00:44:37.584 "params": { 00:44:37.584 "name": "key0", 00:44:37.584 "path": "/tmp/tmp.zQLA0GkMK8" 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "keyring_file_add_key", 00:44:37.584 "params": { 00:44:37.584 "name": "key1", 00:44:37.584 "path": "/tmp/tmp.LeVESVvGCC" 00:44:37.584 } 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 
}, 00:44:37.584 { 00:44:37.584 "subsystem": "iobuf", 00:44:37.584 "config": [ 00:44:37.584 { 00:44:37.584 "method": "iobuf_set_options", 00:44:37.584 "params": { 00:44:37.584 "small_pool_count": 8192, 00:44:37.584 "large_pool_count": 1024, 00:44:37.584 "small_bufsize": 8192, 00:44:37.584 "large_bufsize": 135168, 00:44:37.584 "enable_numa": false 00:44:37.584 } 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "subsystem": "sock", 00:44:37.584 "config": [ 00:44:37.584 { 00:44:37.584 "method": "sock_set_default_impl", 00:44:37.584 "params": { 00:44:37.584 "impl_name": "posix" 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "sock_impl_set_options", 00:44:37.584 "params": { 00:44:37.584 "impl_name": "ssl", 00:44:37.584 "recv_buf_size": 4096, 00:44:37.584 "send_buf_size": 4096, 00:44:37.584 "enable_recv_pipe": true, 00:44:37.584 "enable_quickack": false, 00:44:37.584 "enable_placement_id": 0, 00:44:37.584 "enable_zerocopy_send_server": true, 00:44:37.584 "enable_zerocopy_send_client": false, 00:44:37.584 "zerocopy_threshold": 0, 00:44:37.584 "tls_version": 0, 00:44:37.584 "enable_ktls": false 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "sock_impl_set_options", 00:44:37.584 "params": { 00:44:37.584 "impl_name": "posix", 00:44:37.584 "recv_buf_size": 2097152, 00:44:37.584 "send_buf_size": 2097152, 00:44:37.584 "enable_recv_pipe": true, 00:44:37.584 "enable_quickack": false, 00:44:37.584 "enable_placement_id": 0, 00:44:37.584 "enable_zerocopy_send_server": true, 00:44:37.584 "enable_zerocopy_send_client": false, 00:44:37.584 "zerocopy_threshold": 0, 00:44:37.584 "tls_version": 0, 00:44:37.584 "enable_ktls": false 00:44:37.584 } 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "subsystem": "vmd", 00:44:37.584 "config": [] 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "subsystem": "accel", 00:44:37.584 "config": [ 00:44:37.584 { 00:44:37.584 "method": "accel_set_options", 00:44:37.584 "params": { 00:44:37.584 "small_cache_size": 128, 00:44:37.584 "large_cache_size": 16, 00:44:37.584 "task_count": 2048, 00:44:37.584 "sequence_count": 2048, 00:44:37.584 "buf_count": 2048 00:44:37.584 } 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "subsystem": "bdev", 00:44:37.584 "config": [ 00:44:37.584 { 00:44:37.584 "method": "bdev_set_options", 00:44:37.584 "params": { 00:44:37.584 "bdev_io_pool_size": 65535, 00:44:37.584 "bdev_io_cache_size": 256, 00:44:37.584 "bdev_auto_examine": true, 00:44:37.584 "iobuf_small_cache_size": 128, 00:44:37.584 "iobuf_large_cache_size": 16 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_raid_set_options", 00:44:37.584 "params": { 00:44:37.584 "process_window_size_kb": 1024, 00:44:37.584 "process_max_bandwidth_mb_sec": 0 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_iscsi_set_options", 00:44:37.584 "params": { 00:44:37.584 "timeout_sec": 30 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_nvme_set_options", 00:44:37.584 "params": { 00:44:37.584 "action_on_timeout": "none", 00:44:37.584 "timeout_us": 0, 00:44:37.584 "timeout_admin_us": 0, 00:44:37.584 "keep_alive_timeout_ms": 10000, 00:44:37.584 "arbitration_burst": 0, 00:44:37.584 "low_priority_weight": 0, 00:44:37.584 "medium_priority_weight": 0, 00:44:37.584 "high_priority_weight": 0, 00:44:37.584 "nvme_adminq_poll_period_us": 10000, 00:44:37.584 "nvme_ioq_poll_period_us": 0, 00:44:37.584 "io_queue_requests": 512, 00:44:37.584 
"delay_cmd_submit": true, 00:44:37.584 "transport_retry_count": 4, 00:44:37.584 "bdev_retry_count": 3, 00:44:37.584 "transport_ack_timeout": 0, 00:44:37.584 "ctrlr_loss_timeout_sec": 0, 00:44:37.584 "reconnect_delay_sec": 0, 00:44:37.584 "fast_io_fail_timeout_sec": 0, 00:44:37.584 "disable_auto_failback": false, 00:44:37.584 "generate_uuids": false, 00:44:37.584 "transport_tos": 0, 00:44:37.584 "nvme_error_stat": false, 00:44:37.584 "rdma_srq_size": 0, 00:44:37.584 "io_path_stat": false, 00:44:37.584 "allow_accel_sequence": false, 00:44:37.584 "rdma_max_cq_size": 0, 00:44:37.584 "rdma_cm_event_timeout_ms": 0, 00:44:37.584 "dhchap_digests": [ 00:44:37.584 "sha256", 00:44:37.584 "sha384", 00:44:37.584 "sha512" 00:44:37.584 ], 00:44:37.584 "dhchap_dhgroups": [ 00:44:37.584 "null", 00:44:37.584 "ffdhe2048", 00:44:37.584 "ffdhe3072", 00:44:37.584 "ffdhe4096", 00:44:37.584 "ffdhe6144", 00:44:37.584 "ffdhe8192" 00:44:37.584 ], 00:44:37.584 "rdma_umr_per_io": false 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_nvme_attach_controller", 00:44:37.584 "params": { 00:44:37.584 "name": "nvme0", 00:44:37.584 "trtype": "TCP", 00:44:37.584 "adrfam": "IPv4", 00:44:37.584 "traddr": "127.0.0.1", 00:44:37.584 "trsvcid": "4420", 00:44:37.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:37.584 "prchk_reftag": false, 00:44:37.584 "prchk_guard": false, 00:44:37.584 "ctrlr_loss_timeout_sec": 0, 00:44:37.584 "reconnect_delay_sec": 0, 00:44:37.584 "fast_io_fail_timeout_sec": 0, 00:44:37.584 "psk": "key0", 00:44:37.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:37.584 "hdgst": false, 00:44:37.584 "ddgst": false, 00:44:37.584 "multipath": "multipath" 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_nvme_set_hotplug", 00:44:37.584 "params": { 00:44:37.584 "period_us": 100000, 00:44:37.584 "enable": false 00:44:37.584 } 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "method": "bdev_wait_for_examine" 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 }, 00:44:37.584 { 00:44:37.584 "subsystem": "nbd", 00:44:37.584 "config": [] 00:44:37.584 } 00:44:37.584 ] 00:44:37.584 }' 00:44:37.584 12:50:05 keyring_file -- keyring/file.sh@115 -- # killprocess 659234 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659234 ']' 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659234 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659234 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659234' 00:44:37.584 killing process with pid 659234 00:44:37.584 12:50:05 keyring_file -- common/autotest_common.sh@973 -- # kill 659234 00:44:37.584 Received shutdown signal, test time was about 1.000000 seconds 00:44:37.584 00:44:37.585 Latency(us) 00:44:37.585 [2024-12-13T11:50:05.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:37.585 [2024-12-13T11:50:05.285Z] =================================================================================================================== 00:44:37.585 [2024-12-13T11:50:05.285Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:44:37.585 12:50:05 keyring_file -- common/autotest_common.sh@978 -- # wait 659234 00:44:37.845 12:50:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=660724 00:44:37.845 12:50:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 660724 /var/tmp/bperf.sock 00:44:37.845 12:50:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 660724 ']' 00:44:37.845 12:50:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:37.845 12:50:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:37.845 12:50:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:37.845 12:50:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:37.845 12:50:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:37.845 "subsystems": [ 00:44:37.845 { 00:44:37.845 "subsystem": "keyring", 00:44:37.845 "config": [ 00:44:37.845 { 00:44:37.845 "method": "keyring_file_add_key", 00:44:37.845 "params": { 00:44:37.845 "name": "key0", 00:44:37.845 "path": "/tmp/tmp.zQLA0GkMK8" 00:44:37.845 } 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "method": "keyring_file_add_key", 00:44:37.845 "params": { 00:44:37.845 "name": "key1", 00:44:37.845 "path": "/tmp/tmp.LeVESVvGCC" 00:44:37.845 } 00:44:37.845 } 00:44:37.845 ] 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "subsystem": "iobuf", 00:44:37.845 "config": [ 00:44:37.845 { 00:44:37.845 "method": "iobuf_set_options", 00:44:37.845 "params": { 00:44:37.845 "small_pool_count": 8192, 00:44:37.845 "large_pool_count": 1024, 00:44:37.845 "small_bufsize": 8192, 00:44:37.845 "large_bufsize": 135168, 00:44:37.845 "enable_numa": false 00:44:37.845 } 00:44:37.845 } 00:44:37.845 ] 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "subsystem": "sock", 00:44:37.845 "config": [ 00:44:37.845 { 00:44:37.845 "method": "sock_set_default_impl", 00:44:37.845 "params": { 00:44:37.845 "impl_name": "posix" 00:44:37.845 } 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "method": "sock_impl_set_options", 00:44:37.845 "params": { 00:44:37.845 "impl_name": "ssl", 00:44:37.845 "recv_buf_size": 4096, 00:44:37.845 "send_buf_size": 4096, 00:44:37.845 "enable_recv_pipe": true, 00:44:37.845 "enable_quickack": false, 00:44:37.845 "enable_placement_id": 0, 00:44:37.845 "enable_zerocopy_send_server": true, 00:44:37.845 "enable_zerocopy_send_client": false, 00:44:37.845 "zerocopy_threshold": 0, 00:44:37.845 "tls_version": 0, 00:44:37.845 "enable_ktls": false 00:44:37.845 } 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "method": "sock_impl_set_options", 00:44:37.845 "params": { 00:44:37.845 "impl_name": "posix", 00:44:37.845 "recv_buf_size": 2097152, 00:44:37.845 "send_buf_size": 2097152, 00:44:37.845 "enable_recv_pipe": true, 00:44:37.845 "enable_quickack": false, 00:44:37.845 "enable_placement_id": 0, 00:44:37.845 "enable_zerocopy_send_server": true, 00:44:37.845 "enable_zerocopy_send_client": false, 00:44:37.845 "zerocopy_threshold": 0, 00:44:37.845 "tls_version": 0, 00:44:37.845 "enable_ktls": false 00:44:37.845 } 00:44:37.845 } 00:44:37.845 ] 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "subsystem": "vmd", 00:44:37.845 "config": [] 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "subsystem": "accel", 
00:44:37.845 "config": [ 00:44:37.845 { 00:44:37.845 "method": "accel_set_options", 00:44:37.845 "params": { 00:44:37.845 "small_cache_size": 128, 00:44:37.845 "large_cache_size": 16, 00:44:37.845 "task_count": 2048, 00:44:37.845 "sequence_count": 2048, 00:44:37.845 "buf_count": 2048 00:44:37.845 } 00:44:37.845 } 00:44:37.845 ] 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "subsystem": "bdev", 00:44:37.845 "config": [ 00:44:37.845 { 00:44:37.845 "method": "bdev_set_options", 00:44:37.845 "params": { 00:44:37.845 "bdev_io_pool_size": 65535, 00:44:37.845 "bdev_io_cache_size": 256, 00:44:37.845 "bdev_auto_examine": true, 00:44:37.845 "iobuf_small_cache_size": 128, 00:44:37.845 "iobuf_large_cache_size": 16 00:44:37.845 } 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "method": "bdev_raid_set_options", 00:44:37.845 "params": { 00:44:37.845 "process_window_size_kb": 1024, 00:44:37.845 "process_max_bandwidth_mb_sec": 0 00:44:37.845 } 00:44:37.845 }, 00:44:37.845 { 00:44:37.845 "method": "bdev_iscsi_set_options", 00:44:37.845 "params": { 00:44:37.846 "timeout_sec": 30 00:44:37.846 } 00:44:37.846 }, 00:44:37.846 { 00:44:37.846 "method": "bdev_nvme_set_options", 00:44:37.846 "params": { 00:44:37.846 "action_on_timeout": "none", 00:44:37.846 "timeout_us": 0, 00:44:37.846 "timeout_admin_us": 0, 00:44:37.846 "keep_alive_timeout_ms": 10000, 00:44:37.846 "arbitration_burst": 0, 00:44:37.846 "low_priority_weight": 0, 00:44:37.846 "medium_priority_weight": 0, 00:44:37.846 "high_priority_weight": 0, 00:44:37.846 "nvme_adminq_poll_period_us": 10000, 00:44:37.846 "nvme_ioq_poll_period_us": 0, 00:44:37.846 "io_queue_requests": 512, 00:44:37.846 "delay_cmd_submit": true, 00:44:37.846 "transport_retry_count": 4, 00:44:37.846 "bdev_retry_count": 3, 00:44:37.846 "transport_ack_timeout": 0, 00:44:37.846 "ctrlr_loss_timeout_sec": 0, 00:44:37.846 "reconnect_delay_sec": 0, 00:44:37.846 "fast_io_fail_timeout_sec": 0, 00:44:37.846 "disable_auto_failback": false, 00:44:37.846 "generate_uuids": false, 00:44:37.846 "transport_tos": 0, 00:44:37.846 "nvme_error_stat": false, 00:44:37.846 "rdma_srq_size": 0, 00:44:37.846 "io_path_stat": false, 00:44:37.846 "allow_accel_sequence": false, 00:44:37.846 "rdma_max_cq_size": 0, 00:44:37.846 "rdma_cm_event_timeout_ms": 0, 00:44:37.846 "dhchap_digests": [ 00:44:37.846 "sha256", 00:44:37.846 "sha384", 00:44:37.846 "sha512" 00:44:37.846 ], 00:44:37.846 "dhchap_dhgroups": [ 00:44:37.846 "null", 00:44:37.846 "ffdhe2048", 00:44:37.846 "ffdhe3072", 00:44:37.846 "ffdhe4096", 00:44:37.846 "ffdhe6144", 00:44:37.846 "ffdhe8192" 00:44:37.846 ], 00:44:37.846 "rdma_umr_per_io": false 00:44:37.846 } 00:44:37.846 }, 00:44:37.846 { 00:44:37.846 "method": "bdev_nvme_attach_controller", 00:44:37.846 "params": { 00:44:37.846 "name": "nvme0", 00:44:37.846 "trtype": "TCP", 00:44:37.846 "adrfam": "IPv4", 00:44:37.846 "traddr": "127.0.0.1", 00:44:37.846 "trsvcid": "4420", 00:44:37.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:37.846 "prchk_reftag": false, 00:44:37.846 "prchk_guard": false, 00:44:37.846 "ctrlr_loss_timeout_sec": 0, 00:44:37.846 "reconnect_delay_sec": 0, 00:44:37.846 "fast_io_fail_timeout_sec": 0, 00:44:37.846 "psk": "key0", 00:44:37.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:37.846 "hdgst": false, 00:44:37.846 "ddgst": false, 00:44:37.846 "multipath": "multipath" 00:44:37.846 } 00:44:37.846 }, 00:44:37.846 { 00:44:37.846 "method": "bdev_nvme_set_hotplug", 00:44:37.846 "params": { 00:44:37.846 "period_us": 100000, 00:44:37.846 "enable": false 00:44:37.846 } 00:44:37.846 }, 00:44:37.846 
{ 00:44:37.846 "method": "bdev_wait_for_examine" 00:44:37.846 } 00:44:37.846 ] 00:44:37.846 }, 00:44:37.846 { 00:44:37.846 "subsystem": "nbd", 00:44:37.846 "config": [] 00:44:37.846 } 00:44:37.846 ] 00:44:37.846 }' 00:44:37.846 12:50:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:37.846 12:50:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:37.846 [2024-12-13 12:50:05.453745] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:37.846 [2024-12-13 12:50:05.453798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660724 ] 00:44:37.846 [2024-12-13 12:50:05.525728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.105 [2024-12-13 12:50:05.545239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:38.105 [2024-12-13 12:50:05.701053] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:38.673 12:50:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.673 12:50:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:38.673 12:50:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:38.673 12:50:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:38.673 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:38.932 12:50:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:38.932 12:50:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:38.932 12:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:38.932 12:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:38.932 12:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.932 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:38.932 12:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.191 12:50:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:39.191 12:50:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:39.191 12:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:39.191 12:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.191 12:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.191 12:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.191 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.450 12:50:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:39.450 12:50:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:39.450 12:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:39.450 12:50:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:39.450 12:50:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:39.450 12:50:07 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:44:39.450 12:50:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.zQLA0GkMK8 /tmp/tmp.LeVESVvGCC 00:44:39.450 12:50:07 keyring_file -- keyring/file.sh@20 -- # killprocess 660724 00:44:39.450 12:50:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 660724 ']' 00:44:39.450 12:50:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 660724 00:44:39.450 12:50:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:39.450 12:50:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660724 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660724' 00:44:39.451 killing process with pid 660724 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@973 -- # kill 660724 00:44:39.451 Received shutdown signal, test time was about 1.000000 seconds 00:44:39.451 00:44:39.451 Latency(us) 00:44:39.451 [2024-12-13T11:50:07.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:39.451 [2024-12-13T11:50:07.151Z] =================================================================================================================== 00:44:39.451 [2024-12-13T11:50:07.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:39.451 12:50:07 keyring_file -- common/autotest_common.sh@978 -- # wait 660724 00:44:39.710 12:50:07 keyring_file -- keyring/file.sh@21 -- # killprocess 659228 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659228 ']' 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659228 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659228 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659228' 00:44:39.710 killing process with pid 659228 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@973 -- # kill 659228 00:44:39.710 12:50:07 keyring_file -- common/autotest_common.sh@978 -- # wait 659228 00:44:39.970 00:44:39.970 real 0m11.710s 00:44:39.970 user 0m29.216s 00:44:39.970 sys 0m2.706s 00:44:39.970 12:50:07 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:39.970 12:50:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:39.970 ************************************ 00:44:39.970 END TEST keyring_file 00:44:39.970 ************************************ 00:44:40.229 12:50:07 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:40.229 12:50:07 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:40.230 12:50:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:40.230 12:50:07 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:44:40.230 12:50:07 -- common/autotest_common.sh@10 -- # set +x 00:44:40.230 ************************************ 00:44:40.230 START TEST keyring_linux 00:44:40.230 ************************************ 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:40.230 Joined session keyring: 8416207 00:44:40.230 * Looking for test storage... 00:44:40.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:40.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.230 --rc genhtml_branch_coverage=1 00:44:40.230 --rc genhtml_function_coverage=1 00:44:40.230 --rc genhtml_legend=1 00:44:40.230 --rc geninfo_all_blocks=1 00:44:40.230 --rc geninfo_unexecuted_blocks=1 00:44:40.230 00:44:40.230 ' 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:40.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.230 --rc genhtml_branch_coverage=1 00:44:40.230 --rc genhtml_function_coverage=1 00:44:40.230 --rc genhtml_legend=1 00:44:40.230 --rc geninfo_all_blocks=1 00:44:40.230 --rc geninfo_unexecuted_blocks=1 00:44:40.230 00:44:40.230 ' 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:40.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.230 --rc genhtml_branch_coverage=1 00:44:40.230 --rc genhtml_function_coverage=1 00:44:40.230 --rc genhtml_legend=1 00:44:40.230 --rc geninfo_all_blocks=1 00:44:40.230 --rc geninfo_unexecuted_blocks=1 00:44:40.230 00:44:40.230 ' 00:44:40.230 12:50:07 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:40.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:40.230 --rc genhtml_branch_coverage=1 00:44:40.230 --rc genhtml_function_coverage=1 00:44:40.230 --rc genhtml_legend=1 00:44:40.230 --rc geninfo_all_blocks=1 00:44:40.230 --rc geninfo_unexecuted_blocks=1 00:44:40.230 00:44:40.230 ' 00:44:40.230 12:50:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:40.230 12:50:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:40.230 12:50:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:40.230 12:50:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:40.490 12:50:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:40.490 12:50:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:40.490 12:50:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:40.490 12:50:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.490 12:50:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.490 12:50:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.490 12:50:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:40.490 12:50:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
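The nvmf/common.sh block above sets up the NVMe-oF host identity that the keyring test reuses: a host NQN generated by nvme-cli and the UUID-form host ID derived from it. A minimal sketch of that derivation, assuming nvme-cli is installed (variable names mirror the trace; the connect line is illustrative only and is not run by this test):

NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # strip the NQN prefix to recover the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# the pair can later be handed to nvme-cli style commands, e.g.:
# nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn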
00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:40.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:40.490 12:50:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:40.490 12:50:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:40.490 12:50:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:40.490 12:50:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:40.490 12:50:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:40.491 /tmp/:spdk-test:key0 00:44:40.491 12:50:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:40.491 12:50:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:40.491 
12:50:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:40.491 12:50:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:40.491 12:50:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:40.491 12:50:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:40.491 /tmp/:spdk-test:key1 00:44:40.491 12:50:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:40.491 12:50:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=661259 00:44:40.491 12:50:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 661259 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661259 ']' 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:40.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:40.491 12:50:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:40.491 [2024-12-13 12:50:08.076891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
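The prep_key calls above turn each hex key into the NVMe TLS PSK interchange form before writing it to /tmp/:spdk-test:keyN with mode 0600. A minimal bash sketch consistent with the strings seen in the trace, assuming the interchange layout is base64(key bytes + little-endian CRC32) wrapped as NVMeTLSkey-1:<digest>:<base64>: — this layout is an assumption inferred from the output; the real helper is format_interchange_psk in nvmf/common.sh and its exact body is not shown in this log:

format_psk_sketch() {
    local key=$1 digest=$2
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                      # key material treated as raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed little-endian CRC32 trailer
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), b64))
' "$key" "$digest"
}
format_psk_sketch 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0   # key files must not be group/world readable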
00:44:40.491 [2024-12-13 12:50:08.076940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661259 ] 00:44:40.491 [2024-12-13 12:50:08.149390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.491 [2024-12-13 12:50:08.171462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.750 12:50:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:40.750 12:50:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:40.750 12:50:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:40.750 12:50:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.750 12:50:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:40.750 [2024-12-13 12:50:08.384542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:40.750 null0 00:44:40.750 [2024-12-13 12:50:08.416576] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:40.750 [2024-12-13 12:50:08.416874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.751 12:50:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:40.751 528195908 00:44:40.751 12:50:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:40.751 104530185 00:44:40.751 12:50:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=661276 00:44:40.751 12:50:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 661276 /var/tmp/bperf.sock 00:44:40.751 12:50:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661276 ']' 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:40.751 12:50:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:41.010 [2024-12-13 12:50:08.488227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
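The two keyctl add calls above load the formatted PSKs into the caller's session keyring (@s) and print the kernel-assigned serials (528195908 and 104530185) that the cleanup phase later unlinks. A sketch of the same round trip with keyutils' keyctl, mirroring what linux.sh does per key:

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)  # returns the serial
keyctl search @s user :spdk-test:key0   # name -> serial lookup, as in get_keysn
keyctl print "$sn"                      # dumps the NVMeTLSkey-1:... payload for the match check
keyctl unlink "$sn"                     # what cleanup does per key ("1 links removed")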
00:44:41.010 [2024-12-13 12:50:08.488268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661276 ] 00:44:41.010 [2024-12-13 12:50:08.561311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:41.010 [2024-12-13 12:50:08.583551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.010 12:50:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:41.010 12:50:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:41.010 12:50:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:41.010 12:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:41.269 12:50:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:41.269 12:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:41.528 12:50:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:41.528 12:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:41.788 [2024-12-13 12:50:09.246868] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:41.788 nvme0n1 00:44:41.788 12:50:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:41.788 12:50:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:41.788 12:50:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:41.788 12:50:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:41.788 12:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.788 12:50:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:42.047 12:50:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.047 12:50:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:42.047 12:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@25 -- # sn=528195908 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:42.047 12:50:09 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 528195908 == \5\2\8\1\9\5\9\0\8 ]] 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 528195908 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:42.047 12:50:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:42.307 Running I/O for 1 seconds... 00:44:43.245 21624.00 IOPS, 84.47 MiB/s 00:44:43.245 Latency(us) 00:44:43.245 [2024-12-13T11:50:10.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:43.245 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:43.245 nvme0n1 : 1.01 21625.21 84.47 0.00 0.00 5899.58 4930.80 15291.73 00:44:43.245 [2024-12-13T11:50:10.945Z] =================================================================================================================== 00:44:43.245 [2024-12-13T11:50:10.945Z] Total : 21625.21 84.47 0.00 0.00 5899.58 4930.80 15291.73 00:44:43.245 { 00:44:43.245 "results": [ 00:44:43.245 { 00:44:43.245 "job": "nvme0n1", 00:44:43.245 "core_mask": "0x2", 00:44:43.245 "workload": "randread", 00:44:43.245 "status": "finished", 00:44:43.245 "queue_depth": 128, 00:44:43.245 "io_size": 4096, 00:44:43.245 "runtime": 1.005863, 00:44:43.245 "iops": 21625.21138564596, 00:44:43.245 "mibps": 84.47348197517952, 00:44:43.245 "io_failed": 0, 00:44:43.245 "io_timeout": 0, 00:44:43.245 "avg_latency_us": 5899.578576157201, 00:44:43.245 "min_latency_us": 4930.80380952381, 00:44:43.245 "max_latency_us": 15291.733333333334 00:44:43.245 } 00:44:43.245 ], 00:44:43.245 "core_count": 1 00:44:43.245 } 00:44:43.245 12:50:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:43.245 12:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:43.506 12:50:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:43.506 12:50:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:43.506 12:50:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:43.506 12:50:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:43.506 12:50:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.506 12:50:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:43.765 12:50:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:43.765 [2024-12-13 12:50:11.435400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:43.765 [2024-12-13 12:50:11.435479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f69700 (107): Transport endpoint is not connected 00:44:43.765 [2024-12-13 12:50:11.436472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f69700 (9): Bad file descriptor 00:44:43.765 [2024-12-13 12:50:11.437473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:43.765 [2024-12-13 12:50:11.437482] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:43.765 [2024-12-13 12:50:11.437489] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:43.765 [2024-12-13 12:50:11.437497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
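The attach attempt above deliberately uses :spdk-test:key1, whose PSK does not match the key the target listener was set up with, so controller initialization fails and the RPC returns an error. The test asserts this inverted expectation with a NOT-style wrapper; a hypothetical minimal version is sketched below (SPDK's real helper lives in autotest_common.sh and is more elaborate about exit codes):

NOT() {
    if "$@"; then
        return 1    # the wrapped command unexpectedly succeeded
    fi
    return 0        # the wrapped command failed, which is the expected outcome here
}
NOT ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1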
00:44:43.765 request: 00:44:43.765 { 00:44:43.765 "name": "nvme0", 00:44:43.765 "trtype": "tcp", 00:44:43.765 "traddr": "127.0.0.1", 00:44:43.765 "adrfam": "ipv4", 00:44:43.765 "trsvcid": "4420", 00:44:43.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:43.765 "prchk_reftag": false, 00:44:43.765 "prchk_guard": false, 00:44:43.765 "hdgst": false, 00:44:43.765 "ddgst": false, 00:44:43.765 "psk": ":spdk-test:key1", 00:44:43.765 "allow_unrecognized_csi": false, 00:44:43.765 "method": "bdev_nvme_attach_controller", 00:44:43.765 "req_id": 1 00:44:43.765 } 00:44:43.765 Got JSON-RPC error response 00:44:43.765 response: 00:44:43.765 { 00:44:43.765 "code": -5, 00:44:43.765 "message": "Input/output error" 00:44:43.765 } 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:43.765 12:50:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=528195908 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 528195908 00:44:43.765 1 links removed 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:43.765 12:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=104530185 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 104530185 00:44:44.025 1 links removed 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 661276 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661276 ']' 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661276 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661276 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661276' 00:44:44.025 killing process with pid 661276 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 661276 00:44:44.025 Received shutdown signal, test time was about 1.000000 seconds 00:44:44.025 00:44:44.025 
Latency(us) 00:44:44.025 [2024-12-13T11:50:11.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:44.025 [2024-12-13T11:50:11.725Z] =================================================================================================================== 00:44:44.025 [2024-12-13T11:50:11.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 661276 00:44:44.025 12:50:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 661259 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661259 ']' 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661259 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.025 12:50:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661259 00:44:44.284 12:50:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:44.284 12:50:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:44.284 12:50:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661259' 00:44:44.284 killing process with pid 661259 00:44:44.284 12:50:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 661259 00:44:44.284 12:50:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 661259 00:44:44.543 00:44:44.543 real 0m4.304s 00:44:44.543 user 0m8.128s 00:44:44.543 sys 0m1.450s 00:44:44.543 12:50:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:44.543 12:50:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:44.543 ************************************ 00:44:44.543 END TEST keyring_linux 00:44:44.543 ************************************ 00:44:44.543 12:50:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:44.543 12:50:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:44.543 12:50:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:44.543 12:50:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:44.543 12:50:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:44.543 12:50:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:44.543 12:50:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:44.543 12:50:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:44.543 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:44:44.543 12:50:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:44.543 12:50:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:44.543 12:50:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:44.543 12:50:12 -- common/autotest_common.sh@10 -- # set +x 00:44:49.819 INFO: APP EXITING 00:44:49.819 INFO: 
killing all VMs 00:44:49.819 INFO: killing vhost app 00:44:49.819 INFO: EXIT DONE 00:44:53.111 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:53.111 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:53.111 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:55.647 Cleaning 00:44:55.647 Removing: /var/run/dpdk/spdk0/config 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:55.647 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:55.907 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:55.907 Removing: /var/run/dpdk/spdk1/config 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:55.907 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:55.907 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:55.907 Removing: /var/run/dpdk/spdk2/config 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:55.907 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:55.907 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:55.907 Removing: /var/run/dpdk/spdk3/config 00:44:55.907 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:55.907 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:55.907 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:55.907 Removing: /var/run/dpdk/spdk4/config 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:55.907 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:55.907 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:55.907 Removing: /dev/shm/bdev_svc_trace.1 00:44:55.907 Removing: /dev/shm/nvmf_trace.0 00:44:55.907 Removing: /dev/shm/spdk_tgt_trace.pid104089 00:44:55.907 Removing: /var/run/dpdk/spdk0 00:44:55.907 Removing: /var/run/dpdk/spdk1 00:44:55.907 Removing: /var/run/dpdk/spdk2 00:44:55.908 Removing: /var/run/dpdk/spdk3 00:44:55.908 Removing: /var/run/dpdk/spdk4 00:44:55.908 Removing: /var/run/dpdk/spdk_pid101998 00:44:55.908 Removing: /var/run/dpdk/spdk_pid103032 00:44:55.908 Removing: /var/run/dpdk/spdk_pid104089 00:44:55.908 Removing: /var/run/dpdk/spdk_pid104712 00:44:55.908 Removing: /var/run/dpdk/spdk_pid105634 00:44:55.908 Removing: /var/run/dpdk/spdk_pid105738 00:44:55.908 Removing: /var/run/dpdk/spdk_pid106776 00:44:55.908 Removing: /var/run/dpdk/spdk_pid106820 00:44:55.908 Removing: /var/run/dpdk/spdk_pid107166 00:44:55.908 Removing: /var/run/dpdk/spdk_pid108646 00:44:56.167 Removing: /var/run/dpdk/spdk_pid109897 00:44:56.167 Removing: /var/run/dpdk/spdk_pid110329 00:44:56.167 Removing: /var/run/dpdk/spdk_pid110490 00:44:56.167 Removing: /var/run/dpdk/spdk_pid110760 00:44:56.167 Removing: /var/run/dpdk/spdk_pid111044 00:44:56.167 Removing: /var/run/dpdk/spdk_pid111296 00:44:56.167 Removing: /var/run/dpdk/spdk_pid111542 00:44:56.167 Removing: /var/run/dpdk/spdk_pid111820 00:44:56.167 Removing: /var/run/dpdk/spdk_pid112540 00:44:56.167 Removing: /var/run/dpdk/spdk_pid115467 00:44:56.167 Removing: /var/run/dpdk/spdk_pid115716 00:44:56.167 Removing: /var/run/dpdk/spdk_pid115964 00:44:56.167 Removing: /var/run/dpdk/spdk_pid115974 00:44:56.167 Removing: /var/run/dpdk/spdk_pid116453 00:44:56.167 Removing: /var/run/dpdk/spdk_pid116462 00:44:56.167 Removing: /var/run/dpdk/spdk_pid116942 00:44:56.167 Removing: /var/run/dpdk/spdk_pid116956 00:44:56.167 Removing: /var/run/dpdk/spdk_pid117211 00:44:56.167 Removing: /var/run/dpdk/spdk_pid117417 00:44:56.167 Removing: /var/run/dpdk/spdk_pid117509 00:44:56.167 Removing: /var/run/dpdk/spdk_pid117698 00:44:56.167 Removing: /var/run/dpdk/spdk_pid118117 00:44:56.167 Removing: /var/run/dpdk/spdk_pid118288 00:44:56.167 Removing: /var/run/dpdk/spdk_pid118601 00:44:56.167 Removing: /var/run/dpdk/spdk_pid122430 00:44:56.167 
Removing: /var/run/dpdk/spdk_pid126740 00:44:56.167 Removing: /var/run/dpdk/spdk_pid137198 00:44:56.167 Removing: /var/run/dpdk/spdk_pid137788 00:44:56.167 Removing: /var/run/dpdk/spdk_pid142069 00:44:56.167 Removing: /var/run/dpdk/spdk_pid142316 00:44:56.167 Removing: /var/run/dpdk/spdk_pid146504 00:44:56.167 Removing: /var/run/dpdk/spdk_pid152268 00:44:56.167 Removing: /var/run/dpdk/spdk_pid155013 00:44:56.167 Removing: /var/run/dpdk/spdk_pid165007 00:44:56.167 Removing: /var/run/dpdk/spdk_pid174101 00:44:56.167 Removing: /var/run/dpdk/spdk_pid176075 00:44:56.167 Removing: /var/run/dpdk/spdk_pid177017 00:44:56.167 Removing: /var/run/dpdk/spdk_pid193716 00:44:56.167 Removing: /var/run/dpdk/spdk_pid197714 00:44:56.167 Removing: /var/run/dpdk/spdk_pid279127 00:44:56.168 Removing: /var/run/dpdk/spdk_pid284408 00:44:56.168 Removing: /var/run/dpdk/spdk_pid290261 00:44:56.168 Removing: /var/run/dpdk/spdk_pid296611 00:44:56.168 Removing: /var/run/dpdk/spdk_pid296613 00:44:56.168 Removing: /var/run/dpdk/spdk_pid297500 00:44:56.168 Removing: /var/run/dpdk/spdk_pid298381 00:44:56.168 Removing: /var/run/dpdk/spdk_pid299200 00:44:56.168 Removing: /var/run/dpdk/spdk_pid300245 00:44:56.168 Removing: /var/run/dpdk/spdk_pid300252 00:44:56.168 Removing: /var/run/dpdk/spdk_pid300482 00:44:56.168 Removing: /var/run/dpdk/spdk_pid300705 00:44:56.168 Removing: /var/run/dpdk/spdk_pid300707 00:44:56.168 Removing: /var/run/dpdk/spdk_pid301603 00:44:56.168 Removing: /var/run/dpdk/spdk_pid302429 00:44:56.168 Removing: /var/run/dpdk/spdk_pid303228 00:44:56.168 Removing: /var/run/dpdk/spdk_pid303865 00:44:56.168 Removing: /var/run/dpdk/spdk_pid303871 00:44:56.168 Removing: /var/run/dpdk/spdk_pid304102 00:44:56.168 Removing: /var/run/dpdk/spdk_pid305278 00:44:56.168 Removing: /var/run/dpdk/spdk_pid306308 00:44:56.168 Removing: /var/run/dpdk/spdk_pid314285 00:44:56.168 Removing: /var/run/dpdk/spdk_pid343086 00:44:56.168 Removing: /var/run/dpdk/spdk_pid347485 00:44:56.168 Removing: /var/run/dpdk/spdk_pid349239 00:44:56.168 Removing: /var/run/dpdk/spdk_pid350899 00:44:56.427 Removing: /var/run/dpdk/spdk_pid351091 00:44:56.427 Removing: /var/run/dpdk/spdk_pid351296 00:44:56.427 Removing: /var/run/dpdk/spdk_pid351331 00:44:56.427 Removing: /var/run/dpdk/spdk_pid351822 00:44:56.427 Removing: /var/run/dpdk/spdk_pid353607 00:44:56.427 Removing: /var/run/dpdk/spdk_pid354351 00:44:56.427 Removing: /var/run/dpdk/spdk_pid354837 00:44:56.427 Removing: /var/run/dpdk/spdk_pid356951 00:44:56.427 Removing: /var/run/dpdk/spdk_pid357356 00:44:56.427 Removing: /var/run/dpdk/spdk_pid358053 00:44:56.427 Removing: /var/run/dpdk/spdk_pid362033 00:44:56.427 Removing: /var/run/dpdk/spdk_pid367298 00:44:56.427 Removing: /var/run/dpdk/spdk_pid367299 00:44:56.427 Removing: /var/run/dpdk/spdk_pid367300 00:44:56.427 Removing: /var/run/dpdk/spdk_pid371119 00:44:56.427 Removing: /var/run/dpdk/spdk_pid375044 00:44:56.427 Removing: /var/run/dpdk/spdk_pid380271 00:44:56.427 Removing: /var/run/dpdk/spdk_pid415775 00:44:56.427 Removing: /var/run/dpdk/spdk_pid419813 00:44:56.427 Removing: /var/run/dpdk/spdk_pid425710 00:44:56.427 Removing: /var/run/dpdk/spdk_pid426982 00:44:56.427 Removing: /var/run/dpdk/spdk_pid428275 00:44:56.427 Removing: /var/run/dpdk/spdk_pid429559 00:44:56.427 Removing: /var/run/dpdk/spdk_pid434048 00:44:56.427 Removing: /var/run/dpdk/spdk_pid438227 00:44:56.427 Removing: /var/run/dpdk/spdk_pid442169 00:44:56.427 Removing: /var/run/dpdk/spdk_pid449444 00:44:56.427 Removing: /var/run/dpdk/spdk_pid449546 00:44:56.427 Removing: 
/var/run/dpdk/spdk_pid454034 00:44:56.427 Removing: /var/run/dpdk/spdk_pid454257 00:44:56.427 Removing: /var/run/dpdk/spdk_pid454482 00:44:56.427 Removing: /var/run/dpdk/spdk_pid454926 00:44:56.427 Removing: /var/run/dpdk/spdk_pid454931 00:44:56.427 Removing: /var/run/dpdk/spdk_pid456418 00:44:56.427 Removing: /var/run/dpdk/spdk_pid458553 00:44:56.427 Removing: /var/run/dpdk/spdk_pid460130 00:44:56.427 Removing: /var/run/dpdk/spdk_pid461688 00:44:56.427 Removing: /var/run/dpdk/spdk_pid463292 00:44:56.427 Removing: /var/run/dpdk/spdk_pid465020 00:44:56.427 Removing: /var/run/dpdk/spdk_pid470760 00:44:56.427 Removing: /var/run/dpdk/spdk_pid471319 00:44:56.427 Removing: /var/run/dpdk/spdk_pid473023 00:44:56.427 Removing: /var/run/dpdk/spdk_pid474033 00:44:56.427 Removing: /var/run/dpdk/spdk_pid479632 00:44:56.427 Removing: /var/run/dpdk/spdk_pid482287 00:44:56.427 Removing: /var/run/dpdk/spdk_pid487385 00:44:56.427 Removing: /var/run/dpdk/spdk_pid492845 00:44:56.427 Removing: /var/run/dpdk/spdk_pid501879 00:44:56.427 Removing: /var/run/dpdk/spdk_pid508639 00:44:56.427 Removing: /var/run/dpdk/spdk_pid508691 00:44:56.427 Removing: /var/run/dpdk/spdk_pid527414 00:44:56.427 Removing: /var/run/dpdk/spdk_pid527879 00:44:56.427 Removing: /var/run/dpdk/spdk_pid528338 00:44:56.427 Removing: /var/run/dpdk/spdk_pid528945 00:44:56.427 Removing: /var/run/dpdk/spdk_pid529521 00:44:56.427 Removing: /var/run/dpdk/spdk_pid530184 00:44:56.427 Removing: /var/run/dpdk/spdk_pid530646 00:44:56.427 Removing: /var/run/dpdk/spdk_pid531119 00:44:56.427 Removing: /var/run/dpdk/spdk_pid535293 00:44:56.427 Removing: /var/run/dpdk/spdk_pid535514 00:44:56.427 Removing: /var/run/dpdk/spdk_pid541606 00:44:56.427 Removing: /var/run/dpdk/spdk_pid541659 00:44:56.427 Removing: /var/run/dpdk/spdk_pid547408 00:44:56.427 Removing: /var/run/dpdk/spdk_pid551539 00:44:56.427 Removing: /var/run/dpdk/spdk_pid560907 00:44:56.427 Removing: /var/run/dpdk/spdk_pid561575 00:44:56.427 Removing: /var/run/dpdk/spdk_pid565593 00:44:56.427 Removing: /var/run/dpdk/spdk_pid565990 00:44:56.427 Removing: /var/run/dpdk/spdk_pid569959 00:44:56.427 Removing: /var/run/dpdk/spdk_pid575565 00:44:56.687 Removing: /var/run/dpdk/spdk_pid577971 00:44:56.687 Removing: /var/run/dpdk/spdk_pid587848 00:44:56.687 Removing: /var/run/dpdk/spdk_pid596877 00:44:56.687 Removing: /var/run/dpdk/spdk_pid598506 00:44:56.687 Removing: /var/run/dpdk/spdk_pid599395 00:44:56.687 Removing: /var/run/dpdk/spdk_pid615225 00:44:56.687 Removing: /var/run/dpdk/spdk_pid618985 00:44:56.687 Removing: /var/run/dpdk/spdk_pid621618 00:44:56.687 Removing: /var/run/dpdk/spdk_pid629185 00:44:56.687 Removing: /var/run/dpdk/spdk_pid629190 00:44:56.687 Removing: /var/run/dpdk/spdk_pid634268 00:44:56.687 Removing: /var/run/dpdk/spdk_pid636581 00:44:56.687 Removing: /var/run/dpdk/spdk_pid638494 00:44:56.687 Removing: /var/run/dpdk/spdk_pid639724 00:44:56.687 Removing: /var/run/dpdk/spdk_pid641644 00:44:56.687 Removing: /var/run/dpdk/spdk_pid642686 00:44:56.687 Removing: /var/run/dpdk/spdk_pid651249 00:44:56.687 Removing: /var/run/dpdk/spdk_pid651699 00:44:56.687 Removing: /var/run/dpdk/spdk_pid652145 00:44:56.687 Removing: /var/run/dpdk/spdk_pid654366 00:44:56.687 Removing: /var/run/dpdk/spdk_pid654880 00:44:56.687 Removing: /var/run/dpdk/spdk_pid655431 00:44:56.687 Removing: /var/run/dpdk/spdk_pid659228 00:44:56.687 Removing: /var/run/dpdk/spdk_pid659234 00:44:56.687 Removing: /var/run/dpdk/spdk_pid660724 00:44:56.687 Removing: /var/run/dpdk/spdk_pid661259 00:44:56.687 Removing: 
/var/run/dpdk/spdk_pid661276 00:44:56.687 Clean 00:44:56.687 12:50:24 -- common/autotest_common.sh@1453 -- # return 0 00:44:56.687 12:50:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:56.687 12:50:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:56.687 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:44:56.687 12:50:24 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:56.687 12:50:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:56.687 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:44:56.687 12:50:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:56.687 12:50:24 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:56.687 12:50:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:56.687 12:50:24 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:56.687 12:50:24 -- spdk/autotest.sh@398 -- # hostname 00:44:56.687 12:50:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:56.946 geninfo: WARNING: invalid characters removed from testname! 00:45:18.885 12:50:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:20.264 12:50:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:22.167 12:50:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:24.072 12:50:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:25.977 12:50:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:27.882 12:50:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:29.788 12:50:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:29.788 12:50:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:29.788 12:50:57 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:29.788 12:50:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:29.788 12:50:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:29.788 12:50:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:29.788 + [[ -n 7546 ]] 00:45:29.788 + sudo kill 7546 00:45:29.799 [Pipeline] } 00:45:29.814 [Pipeline] // stage 00:45:29.818 [Pipeline] } 00:45:29.833 [Pipeline] // timeout 00:45:29.838 [Pipeline] } 00:45:29.851 [Pipeline] // catchError 00:45:29.857 [Pipeline] } 00:45:29.871 [Pipeline] // wrap 00:45:29.877 [Pipeline] } 00:45:29.890 [Pipeline] // catchError 00:45:29.899 [Pipeline] stage 00:45:29.901 [Pipeline] { (Epilogue) 00:45:29.913 [Pipeline] catchError 00:45:29.915 [Pipeline] { 00:45:29.927 [Pipeline] echo 00:45:29.929 Cleanup processes 00:45:29.935 [Pipeline] sh 00:45:30.223 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:30.223 672951 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:30.237 [Pipeline] sh 00:45:30.523 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:30.523 ++ grep -v 'sudo pgrep' 00:45:30.523 ++ awk '{print $1}' 00:45:30.523 + sudo kill -9 00:45:30.523 + true 00:45:30.535 [Pipeline] sh 00:45:30.820 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:43.046 [Pipeline] sh 00:45:43.337 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:43.337 Artifacts sizes are good 00:45:43.351 [Pipeline] archiveArtifacts 00:45:43.358 Archiving artifacts 00:45:43.799 [Pipeline] sh 00:45:44.118 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:44.168 [Pipeline] cleanWs 00:45:44.178 [WS-CLEANUP] Deleting project workspace... 00:45:44.178 [WS-CLEANUP] Deferred wipeout is used... 00:45:44.185 [WS-CLEANUP] done 00:45:44.187 [Pipeline] } 00:45:44.203 [Pipeline] // catchError 00:45:44.215 [Pipeline] sh 00:45:44.501 + logger -p user.info -t JENKINS-CI 00:45:44.510 [Pipeline] } 00:45:44.523 [Pipeline] // stage 00:45:44.528 [Pipeline] } 00:45:44.543 [Pipeline] // node 00:45:44.548 [Pipeline] End of Pipeline 00:45:44.592 Finished: SUCCESS